00:00:00.001 Started by upstream project "autotest-per-patch" build number 132774 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:02.285 The recommended git tool is: git 00:00:02.285 using credential 00000000-0000-0000-0000-000000000002 00:00:02.287 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.298 Fetching changes from the remote Git repository 00:00:02.303 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.316 Using shallow fetch with depth 1 00:00:02.316 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.316 > git --version # timeout=10 00:00:02.328 > git --version # 'git version 2.39.2' 00:00:02.328 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.341 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.341 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.609 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.624 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.641 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.641 > git config core.sparsecheckout # timeout=10 00:00:07.655 > git read-tree -mu HEAD # timeout=10 00:00:07.674 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.700 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.701 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.841 [Pipeline] Start of Pipeline 00:00:07.856 [Pipeline] library 00:00:07.858 Loading library shm_lib@master 00:00:07.858 Library shm_lib@master is cached. Copying from home. 00:00:07.877 [Pipeline] node 01:01:53.583 Still waiting to schedule task 01:01:53.583 Waiting for next available executor on ‘vagrant-vm-host’ 01:10:59.497 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 01:10:59.498 [Pipeline] { 01:10:59.510 [Pipeline] catchError 01:10:59.511 [Pipeline] { 01:10:59.526 [Pipeline] wrap 01:10:59.536 [Pipeline] { 01:10:59.544 [Pipeline] stage 01:10:59.546 [Pipeline] { (Prologue) 01:10:59.566 [Pipeline] echo 01:10:59.568 Node: VM-host-WFP1 01:10:59.576 [Pipeline] cleanWs 01:10:59.586 [WS-CLEANUP] Deleting project workspace... 01:10:59.586 [WS-CLEANUP] Deferred wipeout is used... 
01:10:59.594 [WS-CLEANUP] done 01:10:59.781 [Pipeline] setCustomBuildProperty 01:10:59.880 [Pipeline] httpRequest 01:11:00.286 [Pipeline] echo 01:11:00.288 Sorcerer 10.211.164.101 is alive 01:11:00.299 [Pipeline] retry 01:11:00.302 [Pipeline] { 01:11:00.316 [Pipeline] httpRequest 01:11:00.321 HttpMethod: GET 01:11:00.322 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:11:00.323 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:11:00.323 Response Code: HTTP/1.1 200 OK 01:11:00.324 Success: Status code 200 is in the accepted range: 200,404 01:11:00.324 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:11:00.469 [Pipeline] } 01:11:00.487 [Pipeline] // retry 01:11:00.495 [Pipeline] sh 01:11:00.779 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:11:00.799 [Pipeline] httpRequest 01:11:01.130 [Pipeline] echo 01:11:01.133 Sorcerer 10.211.164.101 is alive 01:11:01.146 [Pipeline] retry 01:11:01.149 [Pipeline] { 01:11:01.168 [Pipeline] httpRequest 01:11:01.173 HttpMethod: GET 01:11:01.174 URL: http://10.211.164.101/packages/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:11:01.174 Sending request to url: http://10.211.164.101/packages/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:11:01.175 Response Code: HTTP/1.1 200 OK 01:11:01.176 Success: Status code 200 is in the accepted range: 200,404 01:11:01.176 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:11:03.449 [Pipeline] } 01:11:03.465 [Pipeline] // retry 01:11:03.472 [Pipeline] sh 01:11:03.755 + tar --no-same-owner -xf spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:11:06.298 [Pipeline] sh 01:11:06.583 + git -C spdk log --oneline -n5 01:11:06.583 cabd61f7f env: extend the page table to support 4-KiB mapping 01:11:06.583 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 01:11:06.584 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 01:11:06.584 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:11:06.584 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:11:06.599 [Pipeline] writeFile 01:11:06.611 [Pipeline] sh 01:11:06.894 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 01:11:06.906 [Pipeline] sh 01:11:07.188 + cat autorun-spdk.conf 01:11:07.188 SPDK_RUN_FUNCTIONAL_TEST=1 01:11:07.188 SPDK_TEST_NVME=1 01:11:07.188 SPDK_TEST_FTL=1 01:11:07.188 SPDK_TEST_ISAL=1 01:11:07.188 SPDK_RUN_ASAN=1 01:11:07.188 SPDK_RUN_UBSAN=1 01:11:07.188 SPDK_TEST_XNVME=1 01:11:07.188 SPDK_TEST_NVME_FDP=1 01:11:07.188 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:11:07.194 RUN_NIGHTLY=0 01:11:07.196 [Pipeline] } 01:11:07.207 [Pipeline] // stage 01:11:07.219 [Pipeline] stage 01:11:07.221 [Pipeline] { (Run VM) 01:11:07.232 [Pipeline] sh 01:11:07.515 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 01:11:07.515 + echo 'Start stage prepare_nvme.sh' 01:11:07.515 Start stage prepare_nvme.sh 01:11:07.515 + [[ -n 6 ]] 01:11:07.515 + disk_prefix=ex6 01:11:07.515 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 01:11:07.515 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 01:11:07.516 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 01:11:07.516 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:11:07.516 ++ SPDK_TEST_NVME=1 01:11:07.516 
++ SPDK_TEST_FTL=1 01:11:07.516 ++ SPDK_TEST_ISAL=1 01:11:07.516 ++ SPDK_RUN_ASAN=1 01:11:07.516 ++ SPDK_RUN_UBSAN=1 01:11:07.516 ++ SPDK_TEST_XNVME=1 01:11:07.516 ++ SPDK_TEST_NVME_FDP=1 01:11:07.516 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:11:07.516 ++ RUN_NIGHTLY=0 01:11:07.516 + cd /var/jenkins/workspace/nvme-vg-autotest 01:11:07.516 + nvme_files=() 01:11:07.516 + declare -A nvme_files 01:11:07.516 + backend_dir=/var/lib/libvirt/images/backends 01:11:07.516 + nvme_files['nvme.img']=5G 01:11:07.516 + nvme_files['nvme-cmb.img']=5G 01:11:07.516 + nvme_files['nvme-multi0.img']=4G 01:11:07.516 + nvme_files['nvme-multi1.img']=4G 01:11:07.516 + nvme_files['nvme-multi2.img']=4G 01:11:07.516 + nvme_files['nvme-openstack.img']=8G 01:11:07.516 + nvme_files['nvme-zns.img']=5G 01:11:07.516 + (( SPDK_TEST_NVME_PMR == 1 )) 01:11:07.516 + (( SPDK_TEST_FTL == 1 )) 01:11:07.516 + nvme_files["nvme-ftl.img"]=6G 01:11:07.516 + (( SPDK_TEST_NVME_FDP == 1 )) 01:11:07.516 + nvme_files["nvme-fdp.img"]=1G 01:11:07.516 + [[ ! -d /var/lib/libvirt/images/backends ]] 01:11:07.516 + for nvme in "${!nvme_files[@]}" 01:11:07.516 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 01:11:07.516 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 01:11:07.516 + for nvme in "${!nvme_files[@]}" 01:11:07.516 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 01:11:07.516 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 01:11:07.516 + for nvme in "${!nvme_files[@]}" 01:11:07.516 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 01:11:07.516 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 01:11:07.516 + for nvme in "${!nvme_files[@]}" 01:11:07.516 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 01:11:07.516 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 01:11:07.516 + for nvme in "${!nvme_files[@]}" 01:11:07.516 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 01:11:07.516 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 01:11:07.516 + for nvme in "${!nvme_files[@]}" 01:11:07.516 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 01:11:07.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 01:11:07.775 + for nvme in "${!nvme_files[@]}" 01:11:07.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 01:11:07.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 01:11:07.775 + for nvme in "${!nvme_files[@]}" 01:11:07.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 01:11:07.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 01:11:07.775 + for nvme in "${!nvme_files[@]}" 01:11:07.775 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 01:11:07.775 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 01:11:07.775 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 01:11:07.775 + echo 'End stage prepare_nvme.sh' 01:11:07.775 End stage prepare_nvme.sh 01:11:07.786 [Pipeline] sh 01:11:08.105 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 01:11:08.105 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 01:11:08.105 01:11:08.105 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 01:11:08.105 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 01:11:08.105 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 01:11:08.105 HELP=0 01:11:08.105 DRY_RUN=0 01:11:08.105 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 01:11:08.105 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 01:11:08.105 NVME_AUTO_CREATE=0 01:11:08.105 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 01:11:08.105 NVME_CMB=,,,, 01:11:08.105 NVME_PMR=,,,, 01:11:08.105 NVME_ZNS=,,,, 01:11:08.105 NVME_MS=true,,,, 01:11:08.105 NVME_FDP=,,,on, 01:11:08.105 SPDK_VAGRANT_DISTRO=fedora39 01:11:08.105 SPDK_VAGRANT_VMCPU=10 01:11:08.105 SPDK_VAGRANT_VMRAM=12288 01:11:08.105 SPDK_VAGRANT_PROVIDER=libvirt 01:11:08.105 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 01:11:08.105 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 01:11:08.105 SPDK_OPENSTACK_NETWORK=0 01:11:08.105 VAGRANT_PACKAGE_BOX=0 01:11:08.105 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 01:11:08.105 FORCE_DISTRO=true 01:11:08.105 VAGRANT_BOX_VERSION= 01:11:08.105 EXTRA_VAGRANTFILES= 01:11:08.105 NIC_MODEL=e1000 01:11:08.105 01:11:08.105 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 01:11:08.105 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 01:11:10.667 Bringing machine 'default' up with 'libvirt' provider... 01:11:11.607 ==> default: Creating image (snapshot of base box volume). 01:11:11.867 ==> default: Creating domain with the following settings... 
01:11:11.867 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733720753_641a6dea0528d6f86a94 01:11:11.867 ==> default: -- Domain type: kvm 01:11:11.867 ==> default: -- Cpus: 10 01:11:11.867 ==> default: -- Feature: acpi 01:11:11.867 ==> default: -- Feature: apic 01:11:11.867 ==> default: -- Feature: pae 01:11:11.867 ==> default: -- Memory: 12288M 01:11:11.867 ==> default: -- Memory Backing: hugepages: 01:11:11.867 ==> default: -- Management MAC: 01:11:11.867 ==> default: -- Loader: 01:11:11.867 ==> default: -- Nvram: 01:11:11.867 ==> default: -- Base box: spdk/fedora39 01:11:11.867 ==> default: -- Storage pool: default 01:11:11.867 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733720753_641a6dea0528d6f86a94.img (20G) 01:11:11.867 ==> default: -- Volume Cache: default 01:11:11.867 ==> default: -- Kernel: 01:11:11.867 ==> default: -- Initrd: 01:11:11.867 ==> default: -- Graphics Type: vnc 01:11:11.867 ==> default: -- Graphics Port: -1 01:11:11.867 ==> default: -- Graphics IP: 127.0.0.1 01:11:11.867 ==> default: -- Graphics Password: Not defined 01:11:11.867 ==> default: -- Video Type: cirrus 01:11:11.867 ==> default: -- Video VRAM: 9216 01:11:11.867 ==> default: -- Sound Type: 01:11:11.867 ==> default: -- Keymap: en-us 01:11:11.867 ==> default: -- TPM Path: 01:11:11.867 ==> default: -- INPUT: type=mouse, bus=ps2 01:11:11.867 ==> default: -- Command line args: 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 01:11:11.867 ==> default: -> value=-drive, 01:11:11.867 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 01:11:11.867 ==> default: -> value=-drive, 01:11:11.867 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 01:11:11.867 ==> default: -> value=-drive, 01:11:11.867 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:11.867 ==> default: -> value=-drive, 01:11:11.867 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:11.867 ==> default: -> value=-drive, 01:11:11.867 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 01:11:11.867 ==> default: -> value=-drive, 01:11:11.867 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 01:11:11.867 ==> default: -> value=-device, 01:11:11.867 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:12.126 ==> default: Creating shared folders metadata... 01:11:12.126 ==> default: Starting domain. 01:11:14.665 ==> default: Waiting for domain to get an IP address... 01:11:29.539 ==> default: Waiting for SSH to become available... 01:11:30.917 ==> default: Configuring and enabling network interfaces... 01:11:36.187 default: SSH address: 192.168.121.56:22 01:11:36.187 default: SSH username: vagrant 01:11:36.187 default: SSH auth method: private key 01:11:39.471 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 01:11:47.591 ==> default: Mounting SSHFS shared folder... 01:11:50.150 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 01:11:50.150 ==> default: Checking Mount.. 01:11:52.082 ==> default: Folder Successfully Mounted! 01:11:52.082 ==> default: Running provisioner: file... 01:11:53.019 default: ~/.gitconfig => .gitconfig 01:11:53.586 01:11:53.586 SUCCESS! 01:11:53.586 01:11:53.586 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 01:11:53.586 Use vagrant "suspend" and vagrant "resume" to stop and start. 01:11:53.586 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 01:11:53.586 01:11:53.594 [Pipeline] } 01:11:53.608 [Pipeline] // stage 01:11:53.617 [Pipeline] dir 01:11:53.617 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 01:11:53.619 [Pipeline] { 01:11:53.631 [Pipeline] catchError 01:11:53.633 [Pipeline] { 01:11:53.645 [Pipeline] sh 01:11:53.927 + vagrant ssh-config --host vagrant 01:11:53.927 + sed -ne /^Host/,$p 01:11:53.927 + tee ssh_conf 01:11:57.211 Host vagrant 01:11:57.211 HostName 192.168.121.56 01:11:57.211 User vagrant 01:11:57.211 Port 22 01:11:57.211 UserKnownHostsFile /dev/null 01:11:57.211 StrictHostKeyChecking no 01:11:57.211 PasswordAuthentication no 01:11:57.211 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 01:11:57.211 IdentitiesOnly yes 01:11:57.211 LogLevel FATAL 01:11:57.211 ForwardAgent yes 01:11:57.211 ForwardX11 yes 01:11:57.211 01:11:57.225 [Pipeline] withEnv 01:11:57.228 [Pipeline] { 01:11:57.240 [Pipeline] sh 01:11:57.518 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 01:11:57.518 source /etc/os-release 01:11:57.518 [[ -e /image.version ]] && img=$(< /image.version) 01:11:57.518 # Minimal, systemd-like check. 
01:11:57.518 if [[ -e /.dockerenv ]]; then 01:11:57.518 # Clear garbage from the node's name: 01:11:57.518 # agt-er_autotest_547-896 -> autotest_547-896 01:11:57.518 # $HOSTNAME is the actual container id 01:11:57.518 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 01:11:57.518 if grep -q "/etc/hostname" /proc/self/mountinfo; then 01:11:57.518 # We can assume this is a mount from a host where container is running, 01:11:57.518 # so fetch its hostname to easily identify the target swarm worker. 01:11:57.518 container="$(< /etc/hostname) ($agent)" 01:11:57.518 else 01:11:57.518 # Fallback 01:11:57.518 container=$agent 01:11:57.518 fi 01:11:57.518 fi 01:11:57.518 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 01:11:57.518 01:11:57.789 [Pipeline] } 01:11:57.804 [Pipeline] // withEnv 01:11:57.811 [Pipeline] setCustomBuildProperty 01:11:57.825 [Pipeline] stage 01:11:57.827 [Pipeline] { (Tests) 01:11:57.844 [Pipeline] sh 01:11:58.123 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 01:11:58.396 [Pipeline] sh 01:11:58.773 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 01:11:58.788 [Pipeline] timeout 01:11:58.788 Timeout set to expire in 50 min 01:11:58.790 [Pipeline] { 01:11:58.802 [Pipeline] sh 01:11:59.080 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 01:11:59.646 HEAD is now at cabd61f7f env: extend the page table to support 4-KiB mapping 01:11:59.658 [Pipeline] sh 01:11:59.941 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 01:12:00.214 [Pipeline] sh 01:12:00.496 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 01:12:00.771 [Pipeline] sh 01:12:01.052 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 01:12:01.312 ++ readlink -f spdk_repo 01:12:01.312 + DIR_ROOT=/home/vagrant/spdk_repo 01:12:01.312 + [[ -n /home/vagrant/spdk_repo ]] 01:12:01.312 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 01:12:01.312 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 01:12:01.312 + [[ -d /home/vagrant/spdk_repo/spdk ]] 01:12:01.312 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 01:12:01.312 + [[ -d /home/vagrant/spdk_repo/output ]] 01:12:01.312 + [[ nvme-vg-autotest == pkgdep-* ]] 01:12:01.312 + cd /home/vagrant/spdk_repo 01:12:01.312 + source /etc/os-release 01:12:01.312 ++ NAME='Fedora Linux' 01:12:01.312 ++ VERSION='39 (Cloud Edition)' 01:12:01.312 ++ ID=fedora 01:12:01.312 ++ VERSION_ID=39 01:12:01.312 ++ VERSION_CODENAME= 01:12:01.312 ++ PLATFORM_ID=platform:f39 01:12:01.312 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 01:12:01.312 ++ ANSI_COLOR='0;38;2;60;110;180' 01:12:01.312 ++ LOGO=fedora-logo-icon 01:12:01.312 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 01:12:01.312 ++ HOME_URL=https://fedoraproject.org/ 01:12:01.312 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 01:12:01.312 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 01:12:01.312 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 01:12:01.312 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 01:12:01.312 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 01:12:01.312 ++ REDHAT_SUPPORT_PRODUCT=Fedora 01:12:01.312 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 01:12:01.312 ++ SUPPORT_END=2024-11-12 01:12:01.312 ++ VARIANT='Cloud Edition' 01:12:01.312 ++ VARIANT_ID=cloud 01:12:01.312 + uname -a 01:12:01.312 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 01:12:01.312 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:12:01.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:12:02.138 Hugepages 01:12:02.138 node hugesize free / total 01:12:02.138 node0 1048576kB 0 / 0 01:12:02.138 node0 2048kB 0 / 0 01:12:02.138 01:12:02.138 Type BDF Vendor Device NUMA Driver Device Block devices 01:12:02.138 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:12:02.138 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:12:02.138 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:12:02.138 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 01:12:02.138 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 01:12:02.138 + rm -f /tmp/spdk-ld-path 01:12:02.138 + source autorun-spdk.conf 01:12:02.138 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:12:02.138 ++ SPDK_TEST_NVME=1 01:12:02.138 ++ SPDK_TEST_FTL=1 01:12:02.138 ++ SPDK_TEST_ISAL=1 01:12:02.138 ++ SPDK_RUN_ASAN=1 01:12:02.139 ++ SPDK_RUN_UBSAN=1 01:12:02.139 ++ SPDK_TEST_XNVME=1 01:12:02.139 ++ SPDK_TEST_NVME_FDP=1 01:12:02.139 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:12:02.139 ++ RUN_NIGHTLY=0 01:12:02.139 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 01:12:02.139 + [[ -n '' ]] 01:12:02.139 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 01:12:02.397 + for M in /var/spdk/build-*-manifest.txt 01:12:02.397 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 01:12:02.397 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 01:12:02.397 + for M in /var/spdk/build-*-manifest.txt 01:12:02.397 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 01:12:02.397 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 01:12:02.397 + for M in /var/spdk/build-*-manifest.txt 01:12:02.397 + [[ -f /var/spdk/build-repo-manifest.txt ]] 01:12:02.397 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 01:12:02.397 ++ uname 01:12:02.397 + [[ Linux == \L\i\n\u\x ]] 01:12:02.397 + sudo dmesg -T 01:12:02.397 + sudo dmesg --clear 01:12:02.397 + dmesg_pid=5261 01:12:02.397 
+ [[ Fedora Linux == FreeBSD ]] 01:12:02.397 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:12:02.397 + UNBIND_ENTIRE_IOMMU_GROUP=yes 01:12:02.397 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 01:12:02.397 + [[ -x /usr/src/fio-static/fio ]] 01:12:02.397 + sudo dmesg -Tw 01:12:02.397 + export FIO_BIN=/usr/src/fio-static/fio 01:12:02.397 + FIO_BIN=/usr/src/fio-static/fio 01:12:02.397 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 01:12:02.397 + [[ ! -v VFIO_QEMU_BIN ]] 01:12:02.397 + [[ -e /usr/local/qemu/vfio-user-latest ]] 01:12:02.397 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:12:02.397 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:12:02.397 + [[ -e /usr/local/qemu/vanilla-latest ]] 01:12:02.397 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:12:02.397 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:12:02.397 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:12:02.397 05:06:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:12:02.397 05:06:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:12:02.397 05:06:44 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 01:12:02.397 05:06:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 01:12:02.397 05:06:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:12:02.657 05:06:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:12:02.657 05:06:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:02.657 05:06:44 -- scripts/common.sh@15 -- $ shopt -s extglob 01:12:02.657 05:06:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 01:12:02.657 05:06:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:02.657 05:06:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:02.657 05:06:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:02.657 05:06:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:02.657 05:06:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:02.657 05:06:44 -- paths/export.sh@5 -- $ export PATH 01:12:02.657 05:06:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:02.657 05:06:44 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 01:12:02.657 05:06:44 -- common/autobuild_common.sh@493 -- $ date +%s 01:12:02.657 05:06:44 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733720804.XXXXXX 01:12:02.657 05:06:44 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733720804.74xGLT 01:12:02.657 05:06:44 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 01:12:02.657 05:06:44 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 01:12:02.657 05:06:44 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 01:12:02.657 05:06:44 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 01:12:02.657 05:06:44 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 01:12:02.657 05:06:44 -- common/autobuild_common.sh@509 -- $ get_config_params 01:12:02.657 05:06:44 -- common/autotest_common.sh@409 -- $ xtrace_disable 01:12:02.657 05:06:44 -- common/autotest_common.sh@10 -- $ set +x 01:12:02.657 05:06:44 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 01:12:02.657 05:06:44 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 01:12:02.657 05:06:44 -- pm/common@17 -- $ local monitor 01:12:02.657 05:06:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:12:02.657 05:06:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:12:02.657 05:06:44 -- pm/common@25 -- $ sleep 1 01:12:02.657 05:06:44 -- pm/common@21 -- $ date +%s 01:12:02.657 05:06:44 -- pm/common@21 -- $ date +%s 01:12:02.657 05:06:44 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733720804 01:12:02.657 05:06:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733720804 01:12:02.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733720804_collect-cpu-load.pm.log 01:12:02.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733720804_collect-vmstat.pm.log 01:12:03.591 05:06:45 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 01:12:03.591 05:06:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 01:12:03.591 05:06:45 -- spdk/autobuild.sh@12 -- $ umask 022 01:12:03.591 05:06:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 01:12:03.591 05:06:45 -- spdk/autobuild.sh@16 -- $ date -u 01:12:03.591 Mon Dec 9 05:06:45 AM UTC 2024 01:12:03.591 05:06:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 01:12:03.591 v25.01-pre-279-gcabd61f7f 01:12:03.591 05:06:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 01:12:03.591 05:06:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 01:12:03.591 05:06:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:12:03.591 05:06:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:12:03.591 05:06:46 -- common/autotest_common.sh@10 -- $ set +x 01:12:03.591 ************************************ 01:12:03.591 START TEST asan 01:12:03.591 ************************************ 01:12:03.591 using asan 01:12:03.591 05:06:46 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 01:12:03.591 01:12:03.591 real 0m0.001s 01:12:03.591 user 0m0.000s 01:12:03.591 sys 0m0.000s 01:12:03.591 05:06:46 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:12:03.591 05:06:46 asan -- common/autotest_common.sh@10 -- $ set +x 01:12:03.591 ************************************ 01:12:03.591 END TEST asan 01:12:03.591 ************************************ 01:12:03.849 05:06:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 01:12:03.849 05:06:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 01:12:03.849 05:06:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:12:03.849 05:06:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:12:03.849 05:06:46 -- common/autotest_common.sh@10 -- $ set +x 01:12:03.849 ************************************ 01:12:03.849 START TEST ubsan 01:12:03.849 ************************************ 01:12:03.849 05:06:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 01:12:03.849 using ubsan 01:12:03.849 01:12:03.849 real 0m0.001s 01:12:03.849 user 0m0.000s 01:12:03.849 sys 0m0.001s 01:12:03.850 05:06:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:12:03.850 05:06:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 01:12:03.850 ************************************ 01:12:03.850 END TEST ubsan 01:12:03.850 ************************************ 01:12:03.850 05:06:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 01:12:03.850 05:06:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 01:12:03.850 05:06:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 01:12:03.850 05:06:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 01:12:03.850 05:06:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 01:12:03.850 05:06:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 01:12:03.850 05:06:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
01:12:03.850 05:06:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 01:12:03.850 05:06:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 01:12:03.850 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:12:03.850 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 01:12:04.417 Using 'verbs' RDMA provider 01:12:20.716 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 01:12:35.610 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 01:12:36.449 Creating mk/config.mk...done. 01:12:36.449 Creating mk/cc.flags.mk...done. 01:12:36.449 Type 'make' to build. 01:12:36.449 05:07:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 01:12:36.449 05:07:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:12:36.449 05:07:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:12:36.449 05:07:18 -- common/autotest_common.sh@10 -- $ set +x 01:12:36.449 ************************************ 01:12:36.449 START TEST make 01:12:36.449 ************************************ 01:12:36.449 05:07:18 make -- common/autotest_common.sh@1129 -- $ make -j10 01:12:37.017 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 01:12:37.017 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 01:12:37.017 meson setup builddir \ 01:12:37.017 -Dwith-libaio=enabled \ 01:12:37.017 -Dwith-liburing=enabled \ 01:12:37.017 -Dwith-libvfn=disabled \ 01:12:37.017 -Dwith-spdk=disabled \ 01:12:37.017 -Dexamples=false \ 01:12:37.017 -Dtests=false \ 01:12:37.017 -Dtools=false && \ 01:12:37.017 meson compile -C builddir && \ 01:12:37.017 cd -) 01:12:37.017 make[1]: Nothing to be done for 'all'. 
01:12:39.550 The Meson build system 01:12:39.550 Version: 1.5.0 01:12:39.550 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 01:12:39.550 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 01:12:39.550 Build type: native build 01:12:39.550 Project name: xnvme 01:12:39.550 Project version: 0.7.5 01:12:39.550 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 01:12:39.550 C linker for the host machine: cc ld.bfd 2.40-14 01:12:39.550 Host machine cpu family: x86_64 01:12:39.550 Host machine cpu: x86_64 01:12:39.550 Message: host_machine.system: linux 01:12:39.550 Compiler for C supports arguments -Wno-missing-braces: YES 01:12:39.550 Compiler for C supports arguments -Wno-cast-function-type: YES 01:12:39.550 Compiler for C supports arguments -Wno-strict-aliasing: YES 01:12:39.550 Run-time dependency threads found: YES 01:12:39.550 Has header "setupapi.h" : NO 01:12:39.550 Has header "linux/blkzoned.h" : YES 01:12:39.550 Has header "linux/blkzoned.h" : YES (cached) 01:12:39.550 Has header "libaio.h" : YES 01:12:39.550 Library aio found: YES 01:12:39.550 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 01:12:39.550 Run-time dependency liburing found: YES 2.2 01:12:39.550 Dependency libvfn skipped: feature with-libvfn disabled 01:12:39.550 Found CMake: /usr/bin/cmake (3.27.7) 01:12:39.550 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 01:12:39.550 Subproject spdk : skipped: feature with-spdk disabled 01:12:39.550 Run-time dependency appleframeworks found: NO (tried framework) 01:12:39.550 Run-time dependency appleframeworks found: NO (tried framework) 01:12:39.550 Library rt found: YES 01:12:39.550 Checking for function "clock_gettime" with dependency -lrt: YES 01:12:39.550 Configuring xnvme_config.h using configuration 01:12:39.550 Configuring xnvme.spec using configuration 01:12:39.550 Run-time dependency bash-completion found: YES 2.11 01:12:39.550 Message: Bash-completions: /usr/share/bash-completion/completions 01:12:39.550 Program cp found: YES (/usr/bin/cp) 01:12:39.550 Build targets in project: 3 01:12:39.550 01:12:39.550 xnvme 0.7.5 01:12:39.550 01:12:39.550 Subprojects 01:12:39.550 spdk : NO Feature 'with-spdk' disabled 01:12:39.550 01:12:39.550 User defined options 01:12:39.550 examples : false 01:12:39.550 tests : false 01:12:39.550 tools : false 01:12:39.550 with-libaio : enabled 01:12:39.550 with-liburing: enabled 01:12:39.550 with-libvfn : disabled 01:12:39.550 with-spdk : disabled 01:12:39.550 01:12:39.550 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 01:12:39.550 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 01:12:39.550 [1/76] Generating toolbox/xnvme-driver-script with a custom command 01:12:39.550 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 01:12:39.550 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 01:12:39.550 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 01:12:39.550 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 01:12:39.808 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 01:12:39.808 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 01:12:39.808 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 01:12:39.808 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 01:12:39.808 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 01:12:39.808 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 01:12:39.809 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 01:12:39.809 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 01:12:39.809 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 01:12:39.809 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 01:12:39.809 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 01:12:39.809 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 01:12:39.809 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 01:12:39.809 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 01:12:39.809 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 01:12:39.809 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 01:12:39.809 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 01:12:39.809 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 01:12:39.809 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 01:12:39.809 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 01:12:39.809 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 01:12:39.809 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 01:12:39.809 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 01:12:39.809 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 01:12:40.067 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 01:12:40.067 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 01:12:40.067 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 01:12:40.067 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 01:12:40.067 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 01:12:40.067 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 01:12:40.067 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 01:12:40.067 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 01:12:40.067 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 01:12:40.067 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 01:12:40.067 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 01:12:40.067 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 01:12:40.067 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 01:12:40.067 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 01:12:40.067 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 01:12:40.067 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 01:12:40.067 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 01:12:40.067 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 01:12:40.067 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 01:12:40.067 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 01:12:40.067 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 01:12:40.067 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 01:12:40.067 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 01:12:40.067 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 01:12:40.067 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 01:12:40.067 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 01:12:40.067 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 01:12:40.067 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 01:12:40.326 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 01:12:40.326 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 01:12:40.326 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 01:12:40.326 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 01:12:40.326 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 01:12:40.326 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 01:12:40.326 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 01:12:40.326 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 01:12:40.327 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 01:12:40.327 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 01:12:40.327 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 01:12:40.327 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 01:12:40.327 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 01:12:40.327 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 01:12:40.585 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 01:12:40.585 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 01:12:40.585 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 01:12:40.844 [75/76] Linking static target lib/libxnvme.a 01:12:40.844 [76/76] Linking target lib/libxnvme.so.0.7.5 01:12:40.844 INFO: autodetecting backend as ninja 01:12:40.844 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 01:12:40.844 /home/vagrant/spdk_repo/spdk/xnvmebuild 01:12:47.417 The Meson build system 01:12:47.417 Version: 1.5.0 01:12:47.417 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 01:12:47.417 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 01:12:47.417 Build type: native build 01:12:47.417 Program cat found: YES (/usr/bin/cat) 01:12:47.417 Project name: DPDK 01:12:47.417 Project version: 24.03.0 01:12:47.417 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 01:12:47.417 C linker for the host machine: cc ld.bfd 2.40-14 01:12:47.417 Host machine cpu family: x86_64 01:12:47.417 Host machine cpu: x86_64 01:12:47.417 Message: ## Building in Developer Mode ## 01:12:47.417 Program pkg-config found: YES (/usr/bin/pkg-config) 01:12:47.417 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 01:12:47.417 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 01:12:47.417 Program python3 found: YES (/usr/bin/python3) 01:12:47.417 Program cat found: YES (/usr/bin/cat) 01:12:47.417 Compiler for C supports arguments -march=native: YES 01:12:47.417 Checking for size of "void *" : 8 01:12:47.417 Checking for size of "void *" : 8 (cached) 01:12:47.417 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 01:12:47.417 Library m found: YES 01:12:47.417 Library numa found: YES 01:12:47.417 Has header "numaif.h" : YES 01:12:47.417 Library fdt found: NO 01:12:47.417 Library execinfo found: NO 01:12:47.417 Has header "execinfo.h" : YES 01:12:47.417 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 01:12:47.417 Run-time dependency libarchive found: NO (tried pkgconfig) 01:12:47.417 Run-time dependency libbsd found: NO (tried pkgconfig) 01:12:47.417 Run-time dependency jansson found: NO (tried pkgconfig) 01:12:47.417 Run-time dependency openssl found: YES 3.1.1 01:12:47.417 Run-time dependency libpcap found: YES 1.10.4 01:12:47.417 Has header "pcap.h" with dependency libpcap: YES 01:12:47.417 Compiler for C supports arguments -Wcast-qual: YES 01:12:47.417 Compiler for C supports arguments -Wdeprecated: YES 01:12:47.417 Compiler for C supports arguments -Wformat: YES 01:12:47.417 Compiler for C supports arguments -Wformat-nonliteral: NO 01:12:47.417 Compiler for C supports arguments -Wformat-security: NO 01:12:47.417 Compiler for C supports arguments -Wmissing-declarations: YES 01:12:47.417 Compiler for C supports arguments -Wmissing-prototypes: YES 01:12:47.417 Compiler for C supports arguments -Wnested-externs: YES 01:12:47.417 Compiler for C supports arguments -Wold-style-definition: YES 01:12:47.417 Compiler for C supports arguments -Wpointer-arith: YES 01:12:47.417 Compiler for C supports arguments -Wsign-compare: YES 01:12:47.417 Compiler for C supports arguments -Wstrict-prototypes: YES 01:12:47.417 Compiler for C supports arguments -Wundef: YES 01:12:47.417 Compiler for C supports arguments -Wwrite-strings: YES 01:12:47.417 Compiler for C supports arguments -Wno-address-of-packed-member: YES 01:12:47.417 Compiler for C supports arguments -Wno-packed-not-aligned: YES 01:12:47.417 Compiler for C supports arguments -Wno-missing-field-initializers: YES 01:12:47.417 Compiler for C supports arguments -Wno-zero-length-bounds: YES 01:12:47.417 Program objdump found: YES (/usr/bin/objdump) 01:12:47.417 Compiler for C supports arguments -mavx512f: YES 01:12:47.417 Checking if "AVX512 checking" compiles: YES 01:12:47.417 Fetching value of define "__SSE4_2__" : 1 01:12:47.417 Fetching value of define "__AES__" : 1 01:12:47.417 Fetching value of define "__AVX__" : 1 01:12:47.417 Fetching value of define "__AVX2__" : 1 01:12:47.417 Fetching value of define "__AVX512BW__" : 1 01:12:47.417 Fetching value of define "__AVX512CD__" : 1 01:12:47.417 Fetching value of define "__AVX512DQ__" : 1 01:12:47.417 Fetching value of define "__AVX512F__" : 1 01:12:47.417 Fetching value of define "__AVX512VL__" : 1 01:12:47.417 Fetching value of define "__PCLMUL__" : 1 01:12:47.417 Fetching value of define "__RDRND__" : 1 01:12:47.417 Fetching value of define "__RDSEED__" : 1 01:12:47.417 Fetching value of define "__VPCLMULQDQ__" : (undefined) 01:12:47.417 Fetching value of define "__znver1__" : (undefined) 01:12:47.417 Fetching value of define "__znver2__" : (undefined) 01:12:47.417 Fetching value of define "__znver3__" : (undefined) 01:12:47.417 Fetching value of define "__znver4__" : (undefined) 01:12:47.417 Library asan found: YES 01:12:47.417 Compiler for C supports arguments -Wno-format-truncation: YES 01:12:47.417 Message: lib/log: Defining dependency "log" 01:12:47.417 Message: lib/kvargs: Defining dependency "kvargs" 01:12:47.417 Message: lib/telemetry: Defining dependency "telemetry" 01:12:47.417 Library rt found: YES 01:12:47.417 Checking for function "getentropy" : NO 01:12:47.417 
Message: lib/eal: Defining dependency "eal" 01:12:47.417 Message: lib/ring: Defining dependency "ring" 01:12:47.417 Message: lib/rcu: Defining dependency "rcu" 01:12:47.417 Message: lib/mempool: Defining dependency "mempool" 01:12:47.418 Message: lib/mbuf: Defining dependency "mbuf" 01:12:47.418 Fetching value of define "__PCLMUL__" : 1 (cached) 01:12:47.418 Fetching value of define "__AVX512F__" : 1 (cached) 01:12:47.418 Fetching value of define "__AVX512BW__" : 1 (cached) 01:12:47.418 Fetching value of define "__AVX512DQ__" : 1 (cached) 01:12:47.418 Fetching value of define "__AVX512VL__" : 1 (cached) 01:12:47.418 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 01:12:47.418 Compiler for C supports arguments -mpclmul: YES 01:12:47.418 Compiler for C supports arguments -maes: YES 01:12:47.418 Compiler for C supports arguments -mavx512f: YES (cached) 01:12:47.418 Compiler for C supports arguments -mavx512bw: YES 01:12:47.418 Compiler for C supports arguments -mavx512dq: YES 01:12:47.418 Compiler for C supports arguments -mavx512vl: YES 01:12:47.418 Compiler for C supports arguments -mvpclmulqdq: YES 01:12:47.418 Compiler for C supports arguments -mavx2: YES 01:12:47.418 Compiler for C supports arguments -mavx: YES 01:12:47.418 Message: lib/net: Defining dependency "net" 01:12:47.418 Message: lib/meter: Defining dependency "meter" 01:12:47.418 Message: lib/ethdev: Defining dependency "ethdev" 01:12:47.418 Message: lib/pci: Defining dependency "pci" 01:12:47.418 Message: lib/cmdline: Defining dependency "cmdline" 01:12:47.418 Message: lib/hash: Defining dependency "hash" 01:12:47.418 Message: lib/timer: Defining dependency "timer" 01:12:47.418 Message: lib/compressdev: Defining dependency "compressdev" 01:12:47.418 Message: lib/cryptodev: Defining dependency "cryptodev" 01:12:47.418 Message: lib/dmadev: Defining dependency "dmadev" 01:12:47.418 Compiler for C supports arguments -Wno-cast-qual: YES 01:12:47.418 Message: lib/power: Defining dependency "power" 01:12:47.418 Message: lib/reorder: Defining dependency "reorder" 01:12:47.418 Message: lib/security: Defining dependency "security" 01:12:47.418 Has header "linux/userfaultfd.h" : YES 01:12:47.418 Has header "linux/vduse.h" : YES 01:12:47.418 Message: lib/vhost: Defining dependency "vhost" 01:12:47.418 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 01:12:47.418 Message: drivers/bus/pci: Defining dependency "bus_pci" 01:12:47.418 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 01:12:47.418 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 01:12:47.418 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 01:12:47.418 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 01:12:47.418 Message: Disabling ml/* drivers: missing internal dependency "mldev" 01:12:47.418 Message: Disabling event/* drivers: missing internal dependency "eventdev" 01:12:47.418 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 01:12:47.418 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 01:12:47.418 Program doxygen found: YES (/usr/local/bin/doxygen) 01:12:47.418 Configuring doxy-api-html.conf using configuration 01:12:47.418 Configuring doxy-api-man.conf using configuration 01:12:47.418 Program mandb found: YES (/usr/bin/mandb) 01:12:47.418 Program sphinx-build found: NO 01:12:47.418 Configuring rte_build_config.h using configuration 01:12:47.418 Message: 01:12:47.418 ================= 01:12:47.418 Applications 
Enabled
01:12:47.418 =================
01:12:47.418 
01:12:47.418 apps:
01:12:47.418 
01:12:47.418 
01:12:47.418 Message:
01:12:47.418 =================
01:12:47.418 Libraries Enabled
01:12:47.418 =================
01:12:47.418 
01:12:47.418 libs:
01:12:47.418 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
01:12:47.418 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
01:12:47.418 cryptodev, dmadev, power, reorder, security, vhost,
01:12:47.418 
01:12:47.418 Message:
01:12:47.418 ===============
01:12:47.418 Drivers Enabled
01:12:47.418 ===============
01:12:47.418 
01:12:47.418 common:
01:12:47.418 
01:12:47.418 bus:
01:12:47.418 pci, vdev,
01:12:47.418 mempool:
01:12:47.418 ring,
01:12:47.418 dma:
01:12:47.418 
01:12:47.418 net:
01:12:47.418 
01:12:47.418 crypto:
01:12:47.418 
01:12:47.418 compress:
01:12:47.418 
01:12:47.418 vdpa:
01:12:47.418 
01:12:47.418 
01:12:47.418 Message:
01:12:47.418 =================
01:12:47.418 Content Skipped
01:12:47.418 =================
01:12:47.418 
01:12:47.418 apps:
01:12:47.418 dumpcap: explicitly disabled via build config
01:12:47.418 graph: explicitly disabled via build config
01:12:47.418 pdump: explicitly disabled via build config
01:12:47.418 proc-info: explicitly disabled via build config
01:12:47.418 test-acl: explicitly disabled via build config
01:12:47.418 test-bbdev: explicitly disabled via build config
01:12:47.418 test-cmdline: explicitly disabled via build config
01:12:47.418 test-compress-perf: explicitly disabled via build config
01:12:47.418 test-crypto-perf: explicitly disabled via build config
01:12:47.418 test-dma-perf: explicitly disabled via build config
01:12:47.418 test-eventdev: explicitly disabled via build config
01:12:47.418 test-fib: explicitly disabled via build config
01:12:47.418 test-flow-perf: explicitly disabled via build config
01:12:47.418 test-gpudev: explicitly disabled via build config
01:12:47.418 test-mldev: explicitly disabled via build config
01:12:47.418 test-pipeline: explicitly disabled via build config
01:12:47.418 test-pmd: explicitly disabled via build config
01:12:47.418 test-regex: explicitly disabled via build config
01:12:47.418 test-sad: explicitly disabled via build config
01:12:47.418 test-security-perf: explicitly disabled via build config
01:12:47.418 
01:12:47.418 libs:
01:12:47.418 argparse: explicitly disabled via build config
01:12:47.418 metrics: explicitly disabled via build config
01:12:47.418 acl: explicitly disabled via build config
01:12:47.418 bbdev: explicitly disabled via build config
01:12:47.418 bitratestats: explicitly disabled via build config
01:12:47.418 bpf: explicitly disabled via build config
01:12:47.418 cfgfile: explicitly disabled via build config
01:12:47.418 distributor: explicitly disabled via build config
01:12:47.418 efd: explicitly disabled via build config
01:12:47.418 eventdev: explicitly disabled via build config
01:12:47.418 dispatcher: explicitly disabled via build config
01:12:47.418 gpudev: explicitly disabled via build config
01:12:47.418 gro: explicitly disabled via build config
01:12:47.418 gso: explicitly disabled via build config
01:12:47.418 ip_frag: explicitly disabled via build config
01:12:47.418 jobstats: explicitly disabled via build config
01:12:47.418 latencystats: explicitly disabled via build config
01:12:47.418 lpm: explicitly disabled via build config
01:12:47.418 member: explicitly disabled via build config
01:12:47.418 pcapng: explicitly disabled via build config
01:12:47.418 rawdev: explicitly disabled via build config
01:12:47.418 regexdev: explicitly disabled via build config
01:12:47.418 mldev: explicitly disabled via build config
01:12:47.418 rib: explicitly disabled via build config
01:12:47.418 sched: explicitly disabled via build config
01:12:47.418 stack: explicitly disabled via build config
01:12:47.418 ipsec: explicitly disabled via build config
01:12:47.418 pdcp: explicitly disabled via build config
01:12:47.418 fib: explicitly disabled via build config
01:12:47.418 port: explicitly disabled via build config
01:12:47.418 pdump: explicitly disabled via build config
01:12:47.418 table: explicitly disabled via build config
01:12:47.418 pipeline: explicitly disabled via build config
01:12:47.418 graph: explicitly disabled via build config
01:12:47.418 node: explicitly disabled via build config
01:12:47.418 
01:12:47.418 drivers:
01:12:47.418 common/cpt: not in enabled drivers build config
01:12:47.418 common/dpaax: not in enabled drivers build config
01:12:47.418 common/iavf: not in enabled drivers build config
01:12:47.418 common/idpf: not in enabled drivers build config
01:12:47.418 common/ionic: not in enabled drivers build config
01:12:47.418 common/mvep: not in enabled drivers build config
01:12:47.418 common/octeontx: not in enabled drivers build config
01:12:47.418 bus/auxiliary: not in enabled drivers build config
01:12:47.418 bus/cdx: not in enabled drivers build config
01:12:47.418 bus/dpaa: not in enabled drivers build config
01:12:47.418 bus/fslmc: not in enabled drivers build config
01:12:47.418 bus/ifpga: not in enabled drivers build config
01:12:47.418 bus/platform: not in enabled drivers build config
01:12:47.418 bus/uacce: not in enabled drivers build config
01:12:47.418 bus/vmbus: not in enabled drivers build config
01:12:47.418 common/cnxk: not in enabled drivers build config
01:12:47.418 common/mlx5: not in enabled drivers build config
01:12:47.418 common/nfp: not in enabled drivers build config
01:12:47.418 common/nitrox: not in enabled drivers build config
01:12:47.418 common/qat: not in enabled drivers build config
01:12:47.418 common/sfc_efx: not in enabled drivers build config
01:12:47.418 mempool/bucket: not in enabled drivers build config
01:12:47.418 mempool/cnxk: not in enabled drivers build config
01:12:47.418 mempool/dpaa: not in enabled drivers build config
01:12:47.418 mempool/dpaa2: not in enabled drivers build config
01:12:47.418 mempool/octeontx: not in enabled drivers build config
01:12:47.418 mempool/stack: not in enabled drivers build config
01:12:47.418 dma/cnxk: not in enabled drivers build config
01:12:47.418 dma/dpaa: not in enabled drivers build config
01:12:47.418 dma/dpaa2: not in enabled drivers build config
01:12:47.418 dma/hisilicon: not in enabled drivers build config
01:12:47.418 dma/idxd: not in enabled drivers build config
01:12:47.418 dma/ioat: not in enabled drivers build config
01:12:47.418 dma/skeleton: not in enabled drivers build config
01:12:47.418 net/af_packet: not in enabled drivers build config
01:12:47.418 net/af_xdp: not in enabled drivers build config
01:12:47.418 net/ark: not in enabled drivers build config
01:12:47.418 net/atlantic: not in enabled drivers build config
01:12:47.418 net/avp: not in enabled drivers build config
01:12:47.418 net/axgbe: not in enabled drivers build config
01:12:47.418 net/bnx2x: not in enabled drivers build config
01:12:47.418 net/bnxt: not in enabled drivers build config
01:12:47.418 net/bonding: not in enabled drivers build config
01:12:47.418 net/cnxk: not in enabled drivers build config
01:12:47.419 net/cpfl: not in enabled drivers build config
01:12:47.419 net/cxgbe: not in enabled drivers build config
01:12:47.419 net/dpaa: not in enabled drivers build config
01:12:47.419 net/dpaa2: not in enabled drivers build config
01:12:47.419 net/e1000: not in enabled drivers build config
01:12:47.419 net/ena: not in enabled drivers build config
01:12:47.419 net/enetc: not in enabled drivers build config
01:12:47.419 net/enetfec: not in enabled drivers build config
01:12:47.419 net/enic: not in enabled drivers build config
01:12:47.419 net/failsafe: not in enabled drivers build config
01:12:47.419 net/fm10k: not in enabled drivers build config
01:12:47.419 net/gve: not in enabled drivers build config
01:12:47.419 net/hinic: not in enabled drivers build config
01:12:47.419 net/hns3: not in enabled drivers build config
01:12:47.419 net/i40e: not in enabled drivers build config
01:12:47.419 net/iavf: not in enabled drivers build config
01:12:47.419 net/ice: not in enabled drivers build config
01:12:47.419 net/idpf: not in enabled drivers build config
01:12:47.419 net/igc: not in enabled drivers build config
01:12:47.419 net/ionic: not in enabled drivers build config
01:12:47.419 net/ipn3ke: not in enabled drivers build config
01:12:47.419 net/ixgbe: not in enabled drivers build config
01:12:47.419 net/mana: not in enabled drivers build config
01:12:47.419 net/memif: not in enabled drivers build config
01:12:47.419 net/mlx4: not in enabled drivers build config
01:12:47.419 net/mlx5: not in enabled drivers build config
01:12:47.419 net/mvneta: not in enabled drivers build config
01:12:47.419 net/mvpp2: not in enabled drivers build config
01:12:47.419 net/netvsc: not in enabled drivers build config
01:12:47.419 net/nfb: not in enabled drivers build config
01:12:47.419 net/nfp: not in enabled drivers build config
01:12:47.419 net/ngbe: not in enabled drivers build config
01:12:47.419 net/null: not in enabled drivers build config
01:12:47.419 net/octeontx: not in enabled drivers build config
01:12:47.419 net/octeon_ep: not in enabled drivers build config
01:12:47.419 net/pcap: not in enabled drivers build config
01:12:47.419 net/pfe: not in enabled drivers build config
01:12:47.419 net/qede: not in enabled drivers build config
01:12:47.419 net/ring: not in enabled drivers build config
01:12:47.419 net/sfc: not in enabled drivers build config
01:12:47.419 net/softnic: not in enabled drivers build config
01:12:47.419 net/tap: not in enabled drivers build config
01:12:47.419 net/thunderx: not in enabled drivers build config
01:12:47.419 net/txgbe: not in enabled drivers build config
01:12:47.419 net/vdev_netvsc: not in enabled drivers build config
01:12:47.419 net/vhost: not in enabled drivers build config
01:12:47.419 net/virtio: not in enabled drivers build config
01:12:47.419 net/vmxnet3: not in enabled drivers build config
01:12:47.419 raw/*: missing internal dependency, "rawdev"
01:12:47.419 crypto/armv8: not in enabled drivers build config
01:12:47.419 crypto/bcmfs: not in enabled drivers build config
01:12:47.419 crypto/caam_jr: not in enabled drivers build config
01:12:47.419 crypto/ccp: not in enabled drivers build config
01:12:47.419 crypto/cnxk: not in enabled drivers build config
01:12:47.419 crypto/dpaa_sec: not in enabled drivers build config
01:12:47.419 crypto/dpaa2_sec: not in enabled drivers build config
01:12:47.419 crypto/ipsec_mb: not in enabled drivers build config
01:12:47.419 crypto/mlx5: not in enabled drivers build config
01:12:47.419 crypto/mvsam: not in enabled drivers build config
01:12:47.419 crypto/nitrox: not in enabled drivers build config
01:12:47.419 crypto/null: not in enabled drivers build config
01:12:47.419 crypto/octeontx: not in enabled drivers build config
01:12:47.419 crypto/openssl: not in enabled drivers build config
01:12:47.419 crypto/scheduler: not in enabled drivers build config
01:12:47.419 crypto/uadk: not in enabled drivers build config
01:12:47.419 crypto/virtio: not in enabled drivers build config
01:12:47.419 compress/isal: not in enabled drivers build config
01:12:47.419 compress/mlx5: not in enabled drivers build config
01:12:47.419 compress/nitrox: not in enabled drivers build config
01:12:47.419 compress/octeontx: not in enabled drivers build config
01:12:47.419 compress/zlib: not in enabled drivers build config
01:12:47.419 regex/*: missing internal dependency, "regexdev"
01:12:47.419 ml/*: missing internal dependency, "mldev"
01:12:47.419 vdpa/ifc: not in enabled drivers build config
01:12:47.419 vdpa/mlx5: not in enabled drivers build config
01:12:47.419 vdpa/nfp: not in enabled drivers build config
01:12:47.419 vdpa/sfc: not in enabled drivers build config
01:12:47.419 event/*: missing internal dependency, "eventdev"
01:12:47.419 baseband/*: missing internal dependency, "bbdev"
01:12:47.419 gpu/*: missing internal dependency, "gpudev"
01:12:47.419 
01:12:47.419 
01:12:47.678 Build targets in project: 85
01:12:47.678 
01:12:47.678 DPDK 24.03.0
01:12:47.678 
01:12:47.678 User defined options
01:12:47.678 buildtype : debug
01:12:47.678 default_library : shared
01:12:47.678 libdir : lib
01:12:47.678 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
01:12:47.678 b_sanitize : address
01:12:47.678 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
01:12:47.678 c_link_args :
01:12:47.678 cpu_instruction_set: native
01:12:47.678 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
01:12:47.678 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
01:12:47.678 enable_docs : false
01:12:47.678 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
01:12:47.678 enable_kmods : false
01:12:47.678 max_lcores : 128
01:12:47.678 tests : false
01:12:47.678 
01:12:47.678 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
01:12:48.246 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
01:12:48.246 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
01:12:48.246 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
01:12:48.246 [3/268] Linking static target lib/librte_kvargs.a
01:12:48.246 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
01:12:48.506 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
01:12:48.506 [6/268] Linking static target lib/librte_log.a
01:12:48.766 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
01:12:48.766 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
01:12:48.766 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
01:12:48.766 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 01:12:48.766 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 01:12:48.766 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 01:12:48.766 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 01:12:48.766 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 01:12:48.766 [15/268] Linking static target lib/librte_telemetry.a 01:12:48.766 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 01:12:49.024 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 01:12:49.024 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 01:12:49.284 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 01:12:49.284 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 01:12:49.284 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 01:12:49.284 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 01:12:49.284 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 01:12:49.543 [24/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 01:12:49.543 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 01:12:49.543 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 01:12:49.543 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 01:12:49.543 [28/268] Linking target lib/librte_log.so.24.1 01:12:49.543 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 01:12:49.544 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 01:12:49.803 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 01:12:49.803 [32/268] Linking target lib/librte_kvargs.so.24.1 01:12:49.803 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 01:12:49.803 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 01:12:49.803 [35/268] Linking target lib/librte_telemetry.so.24.1 01:12:50.062 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 01:12:50.062 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 01:12:50.062 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 01:12:50.062 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 01:12:50.062 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 01:12:50.062 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 01:12:50.062 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 01:12:50.062 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 01:12:50.062 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 01:12:50.062 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 01:12:50.062 [46/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 01:12:50.320 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 01:12:50.320 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 01:12:50.320 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 01:12:50.580 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 01:12:50.580 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 01:12:50.580 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 01:12:50.839 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 01:12:50.839 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 01:12:50.839 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 01:12:50.839 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 01:12:50.839 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 01:12:50.839 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 01:12:50.839 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 01:12:51.098 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 01:12:51.098 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 01:12:51.098 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 01:12:51.098 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 01:12:51.373 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 01:12:51.373 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 01:12:51.373 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 01:12:51.373 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 01:12:51.373 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 01:12:51.665 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 01:12:51.665 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 01:12:51.665 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 01:12:51.665 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 01:12:51.665 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 01:12:51.924 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 01:12:51.924 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 01:12:51.924 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 01:12:51.924 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 01:12:51.924 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 01:12:51.924 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 01:12:51.924 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 01:12:52.184 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 01:12:52.184 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 01:12:52.184 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 01:12:52.184 [84/268] Linking static target lib/librte_ring.a 01:12:52.443 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 01:12:52.443 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 01:12:52.443 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 01:12:52.443 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 01:12:52.443 [89/268] Compiling C 
object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 01:12:52.443 [90/268] Linking static target lib/librte_eal.a 01:12:52.443 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 01:12:52.443 [92/268] Linking static target lib/librte_mempool.a 01:12:52.443 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 01:12:52.443 [94/268] Linking static target lib/librte_rcu.a 01:12:52.727 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 01:12:52.727 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 01:12:52.727 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 01:12:52.727 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 01:12:52.727 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 01:12:52.727 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 01:12:52.985 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 01:12:52.985 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 01:12:53.245 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 01:12:53.245 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 01:12:53.245 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 01:12:53.245 [106/268] Linking static target lib/librte_meter.a 01:12:53.245 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 01:12:53.245 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 01:12:53.245 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 01:12:53.245 [110/268] Linking static target lib/librte_net.a 01:12:53.245 [111/268] Linking static target lib/librte_mbuf.a 01:12:53.504 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 01:12:53.504 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 01:12:53.504 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 01:12:53.504 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 01:12:53.764 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 01:12:53.764 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 01:12:53.764 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 01:12:54.023 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 01:12:54.023 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 01:12:54.282 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 01:12:54.282 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 01:12:54.282 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 01:12:54.541 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 01:12:54.541 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 01:12:54.541 [126/268] Linking static target lib/librte_pci.a 01:12:54.541 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 01:12:54.541 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 01:12:54.799 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 01:12:54.799 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 
01:12:54.799 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 01:12:54.799 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 01:12:54.799 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 01:12:54.799 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 01:12:54.799 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 01:12:54.799 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 01:12:54.800 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 01:12:54.800 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 01:12:55.058 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 01:12:55.058 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 01:12:55.058 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 01:12:55.058 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 01:12:55.058 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 01:12:55.058 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 01:12:55.058 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 01:12:55.316 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 01:12:55.316 [147/268] Linking static target lib/librte_cmdline.a 01:12:55.316 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 01:12:55.316 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 01:12:55.316 [150/268] Linking static target lib/librte_timer.a 01:12:55.573 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 01:12:55.573 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 01:12:55.573 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 01:12:55.573 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 01:12:55.831 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 01:12:56.089 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 01:12:56.089 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 01:12:56.089 [158/268] Linking static target lib/librte_hash.a 01:12:56.089 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 01:12:56.089 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 01:12:56.089 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 01:12:56.089 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 01:12:56.089 [163/268] Linking static target lib/librte_compressdev.a 01:12:56.348 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 01:12:56.348 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 01:12:56.348 [166/268] Linking static target lib/librte_dmadev.a 01:12:56.348 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 01:12:56.348 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 01:12:56.606 [169/268] Linking static target lib/librte_ethdev.a 01:12:56.606 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 01:12:56.606 
[171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 01:12:56.606 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 01:12:56.865 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 01:12:56.865 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 01:12:57.124 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 01:12:57.124 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 01:12:57.124 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:57.124 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 01:12:57.124 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:57.383 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 01:12:57.383 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 01:12:57.383 [182/268] Linking static target lib/librte_cryptodev.a 01:12:57.383 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 01:12:57.383 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 01:12:57.643 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 01:12:57.643 [186/268] Linking static target lib/librte_power.a 01:12:57.643 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 01:12:57.643 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 01:12:57.643 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 01:12:57.901 [190/268] Linking static target lib/librte_reorder.a 01:12:57.901 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 01:12:57.901 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 01:12:57.901 [193/268] Linking static target lib/librte_security.a 01:12:58.468 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 01:12:58.468 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 01:12:58.727 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 01:12:58.727 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 01:12:58.986 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 01:12:58.986 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 01:12:58.986 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 01:12:58.986 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 01:12:59.245 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 01:12:59.245 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 01:12:59.245 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 01:12:59.245 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 01:12:59.245 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 01:12:59.516 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 01:12:59.516 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 01:12:59.516 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 
01:12:59.516 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 01:12:59.841 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:59.841 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 01:12:59.841 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:12:59.841 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:12:59.841 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 01:12:59.841 [216/268] Linking static target drivers/librte_bus_vdev.a 01:12:59.841 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:12:59.841 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:12:59.841 [219/268] Linking static target drivers/librte_bus_pci.a 01:12:59.841 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 01:12:59.841 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 01:13:00.099 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 01:13:00.099 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:13:00.099 [224/268] Linking static target drivers/librte_mempool_ring.a 01:13:00.099 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:13:00.099 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 01:13:00.357 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 01:13:01.293 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 01:13:04.578 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 01:13:04.578 [230/268] Linking static target lib/librte_vhost.a 01:13:05.143 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 01:13:05.144 [232/268] Linking target lib/librte_eal.so.24.1 01:13:05.402 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 01:13:05.402 [234/268] Linking target lib/librte_ring.so.24.1 01:13:05.402 [235/268] Linking target lib/librte_pci.so.24.1 01:13:05.402 [236/268] Linking target lib/librte_meter.so.24.1 01:13:05.402 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 01:13:05.402 [238/268] Linking target lib/librte_timer.so.24.1 01:13:05.402 [239/268] Linking target lib/librte_dmadev.so.24.1 01:13:05.660 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 01:13:05.660 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 01:13:05.660 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 01:13:05.660 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 01:13:05.660 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 01:13:05.660 [245/268] Linking target drivers/librte_bus_pci.so.24.1 01:13:05.660 [246/268] Linking target lib/librte_rcu.so.24.1 01:13:05.660 [247/268] Linking target lib/librte_mempool.so.24.1 01:13:05.660 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 01:13:05.660 [249/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 01:13:05.660 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 01:13:05.660 [251/268] Linking target lib/librte_mbuf.so.24.1 01:13:05.919 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 01:13:05.919 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 01:13:05.919 [254/268] Linking target lib/librte_reorder.so.24.1 01:13:05.919 [255/268] Linking target lib/librte_compressdev.so.24.1 01:13:05.919 [256/268] Linking target lib/librte_net.so.24.1 01:13:05.919 [257/268] Linking target lib/librte_cryptodev.so.24.1 01:13:06.177 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 01:13:06.177 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 01:13:06.177 [260/268] Linking target lib/librte_cmdline.so.24.1 01:13:06.177 [261/268] Linking target lib/librte_hash.so.24.1 01:13:06.177 [262/268] Linking target lib/librte_security.so.24.1 01:13:06.177 [263/268] Linking target lib/librte_ethdev.so.24.1 01:13:06.177 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 01:13:06.177 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 01:13:06.436 [266/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 01:13:06.436 [267/268] Linking target lib/librte_power.so.24.1 01:13:06.436 [268/268] Linking target lib/librte_vhost.so.24.1 01:13:06.436 INFO: autodetecting backend as ninja 01:13:06.436 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 01:13:24.544 CC lib/ut/ut.o 01:13:24.544 CC lib/ut_mock/mock.o 01:13:24.544 CC lib/log/log.o 01:13:24.544 CC lib/log/log_flags.o 01:13:24.544 CC lib/log/log_deprecated.o 01:13:24.544 LIB libspdk_ut.a 01:13:24.544 LIB libspdk_log.a 01:13:24.544 LIB libspdk_ut_mock.a 01:13:24.544 SO libspdk_ut.so.2.0 01:13:24.544 SO libspdk_log.so.7.1 01:13:24.544 SO libspdk_ut_mock.so.6.0 01:13:24.544 SYMLINK libspdk_ut.so 01:13:24.544 SYMLINK libspdk_log.so 01:13:24.544 SYMLINK libspdk_ut_mock.so 01:13:24.544 CXX lib/trace_parser/trace.o 01:13:24.544 CC lib/dma/dma.o 01:13:24.544 CC lib/util/bit_array.o 01:13:24.544 CC lib/util/base64.o 01:13:24.544 CC lib/util/cpuset.o 01:13:24.544 CC lib/util/crc32c.o 01:13:24.544 CC lib/util/crc32.o 01:13:24.544 CC lib/util/crc16.o 01:13:24.544 CC lib/ioat/ioat.o 01:13:24.544 CC lib/vfio_user/host/vfio_user_pci.o 01:13:24.544 CC lib/util/crc32_ieee.o 01:13:24.544 CC lib/util/crc64.o 01:13:24.544 CC lib/vfio_user/host/vfio_user.o 01:13:24.544 CC lib/util/dif.o 01:13:24.544 LIB libspdk_dma.a 01:13:24.544 CC lib/util/fd.o 01:13:24.544 CC lib/util/fd_group.o 01:13:24.544 SO libspdk_dma.so.5.0 01:13:24.544 CC lib/util/file.o 01:13:24.544 CC lib/util/hexlify.o 01:13:24.544 SYMLINK libspdk_dma.so 01:13:24.544 CC lib/util/iov.o 01:13:24.544 CC lib/util/math.o 01:13:24.544 CC lib/util/net.o 01:13:24.544 LIB libspdk_ioat.a 01:13:24.544 LIB libspdk_vfio_user.a 01:13:24.544 SO libspdk_ioat.so.7.0 01:13:24.544 CC lib/util/pipe.o 01:13:24.544 CC lib/util/strerror_tls.o 01:13:24.544 SO libspdk_vfio_user.so.5.0 01:13:24.544 SYMLINK libspdk_ioat.so 01:13:24.544 CC lib/util/string.o 01:13:24.544 CC lib/util/uuid.o 01:13:24.544 SYMLINK libspdk_vfio_user.so 01:13:24.544 CC lib/util/xor.o 01:13:24.544 CC lib/util/zipf.o 01:13:24.544 CC lib/util/md5.o 01:13:24.544 LIB 
libspdk_util.a 01:13:24.544 LIB libspdk_trace_parser.a 01:13:24.544 SO libspdk_trace_parser.so.6.0 01:13:24.544 SO libspdk_util.so.10.1 01:13:24.544 SYMLINK libspdk_trace_parser.so 01:13:24.544 SYMLINK libspdk_util.so 01:13:24.544 CC lib/env_dpdk/env.o 01:13:24.544 CC lib/env_dpdk/memory.o 01:13:24.544 CC lib/env_dpdk/pci.o 01:13:24.544 CC lib/env_dpdk/init.o 01:13:24.544 CC lib/env_dpdk/threads.o 01:13:24.544 CC lib/idxd/idxd.o 01:13:24.544 CC lib/rdma_utils/rdma_utils.o 01:13:24.544 CC lib/json/json_parse.o 01:13:24.544 CC lib/conf/conf.o 01:13:24.544 CC lib/vmd/vmd.o 01:13:24.802 CC lib/vmd/led.o 01:13:24.802 LIB libspdk_conf.a 01:13:24.802 SO libspdk_conf.so.6.0 01:13:24.802 CC lib/json/json_util.o 01:13:24.802 LIB libspdk_rdma_utils.a 01:13:24.802 SO libspdk_rdma_utils.so.1.0 01:13:24.802 CC lib/env_dpdk/pci_ioat.o 01:13:24.802 SYMLINK libspdk_conf.so 01:13:24.802 CC lib/env_dpdk/pci_virtio.o 01:13:25.060 CC lib/json/json_write.o 01:13:25.060 SYMLINK libspdk_rdma_utils.so 01:13:25.060 CC lib/idxd/idxd_user.o 01:13:25.060 CC lib/idxd/idxd_kernel.o 01:13:25.060 CC lib/env_dpdk/pci_vmd.o 01:13:25.060 CC lib/env_dpdk/pci_idxd.o 01:13:25.060 CC lib/env_dpdk/pci_event.o 01:13:25.060 CC lib/env_dpdk/sigbus_handler.o 01:13:25.317 CC lib/rdma_provider/common.o 01:13:25.317 CC lib/rdma_provider/rdma_provider_verbs.o 01:13:25.317 CC lib/env_dpdk/pci_dpdk.o 01:13:25.317 LIB libspdk_json.a 01:13:25.317 LIB libspdk_idxd.a 01:13:25.317 SO libspdk_json.so.6.0 01:13:25.317 CC lib/env_dpdk/pci_dpdk_2207.o 01:13:25.317 SO libspdk_idxd.so.12.1 01:13:25.317 CC lib/env_dpdk/pci_dpdk_2211.o 01:13:25.317 SYMLINK libspdk_json.so 01:13:25.317 LIB libspdk_vmd.a 01:13:25.317 SYMLINK libspdk_idxd.so 01:13:25.317 SO libspdk_vmd.so.6.0 01:13:25.317 LIB libspdk_rdma_provider.a 01:13:25.576 SO libspdk_rdma_provider.so.7.0 01:13:25.576 SYMLINK libspdk_vmd.so 01:13:25.576 SYMLINK libspdk_rdma_provider.so 01:13:25.576 CC lib/jsonrpc/jsonrpc_server.o 01:13:25.576 CC lib/jsonrpc/jsonrpc_client.o 01:13:25.576 CC lib/jsonrpc/jsonrpc_server_tcp.o 01:13:25.576 CC lib/jsonrpc/jsonrpc_client_tcp.o 01:13:25.835 LIB libspdk_jsonrpc.a 01:13:26.093 SO libspdk_jsonrpc.so.6.0 01:13:26.093 SYMLINK libspdk_jsonrpc.so 01:13:26.093 LIB libspdk_env_dpdk.a 01:13:26.353 SO libspdk_env_dpdk.so.15.1 01:13:26.353 SYMLINK libspdk_env_dpdk.so 01:13:26.353 CC lib/rpc/rpc.o 01:13:26.613 LIB libspdk_rpc.a 01:13:26.876 SO libspdk_rpc.so.6.0 01:13:26.876 SYMLINK libspdk_rpc.so 01:13:27.135 CC lib/notify/notify.o 01:13:27.135 CC lib/keyring/keyring.o 01:13:27.135 CC lib/notify/notify_rpc.o 01:13:27.135 CC lib/keyring/keyring_rpc.o 01:13:27.135 CC lib/trace/trace.o 01:13:27.135 CC lib/trace/trace_rpc.o 01:13:27.135 CC lib/trace/trace_flags.o 01:13:27.393 LIB libspdk_notify.a 01:13:27.393 SO libspdk_notify.so.6.0 01:13:27.393 SYMLINK libspdk_notify.so 01:13:27.393 LIB libspdk_keyring.a 01:13:27.393 LIB libspdk_trace.a 01:13:27.651 SO libspdk_keyring.so.2.0 01:13:27.651 SO libspdk_trace.so.11.0 01:13:27.651 SYMLINK libspdk_keyring.so 01:13:27.651 SYMLINK libspdk_trace.so 01:13:28.219 CC lib/sock/sock.o 01:13:28.219 CC lib/thread/thread.o 01:13:28.219 CC lib/sock/sock_rpc.o 01:13:28.219 CC lib/thread/iobuf.o 01:13:28.478 LIB libspdk_sock.a 01:13:28.478 SO libspdk_sock.so.10.0 01:13:28.738 SYMLINK libspdk_sock.so 01:13:29.000 CC lib/nvme/nvme_ctrlr.o 01:13:29.000 CC lib/nvme/nvme_ctrlr_cmd.o 01:13:29.000 CC lib/nvme/nvme_fabric.o 01:13:29.000 CC lib/nvme/nvme_ns_cmd.o 01:13:29.000 CC lib/nvme/nvme_ns.o 01:13:29.000 CC lib/nvme/nvme_pcie.o 01:13:29.000 CC 
lib/nvme/nvme.o 01:13:29.000 CC lib/nvme/nvme_qpair.o 01:13:29.000 CC lib/nvme/nvme_pcie_common.o 01:13:29.585 LIB libspdk_thread.a 01:13:29.585 SO libspdk_thread.so.11.0 01:13:29.585 CC lib/nvme/nvme_quirks.o 01:13:29.884 CC lib/nvme/nvme_transport.o 01:13:29.884 SYMLINK libspdk_thread.so 01:13:29.884 CC lib/nvme/nvme_discovery.o 01:13:29.884 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 01:13:29.884 CC lib/nvme/nvme_ns_ocssd_cmd.o 01:13:29.884 CC lib/nvme/nvme_tcp.o 01:13:29.884 CC lib/nvme/nvme_opal.o 01:13:29.884 CC lib/nvme/nvme_io_msg.o 01:13:30.143 CC lib/nvme/nvme_poll_group.o 01:13:30.143 CC lib/nvme/nvme_zns.o 01:13:30.404 CC lib/nvme/nvme_stubs.o 01:13:30.404 CC lib/nvme/nvme_auth.o 01:13:30.404 CC lib/nvme/nvme_cuse.o 01:13:30.404 CC lib/accel/accel.o 01:13:30.404 CC lib/accel/accel_rpc.o 01:13:30.665 CC lib/blob/blobstore.o 01:13:30.665 CC lib/accel/accel_sw.o 01:13:30.665 CC lib/nvme/nvme_rdma.o 01:13:30.665 CC lib/blob/request.o 01:13:30.665 CC lib/blob/zeroes.o 01:13:30.925 CC lib/blob/blob_bs_dev.o 01:13:31.184 CC lib/init/json_config.o 01:13:31.184 CC lib/virtio/virtio.o 01:13:31.184 CC lib/init/subsystem.o 01:13:31.184 CC lib/virtio/virtio_vhost_user.o 01:13:31.184 CC lib/fsdev/fsdev.o 01:13:31.443 CC lib/fsdev/fsdev_io.o 01:13:31.443 CC lib/fsdev/fsdev_rpc.o 01:13:31.443 CC lib/init/subsystem_rpc.o 01:13:31.443 CC lib/init/rpc.o 01:13:31.443 CC lib/virtio/virtio_vfio_user.o 01:13:31.443 CC lib/virtio/virtio_pci.o 01:13:31.443 LIB libspdk_init.a 01:13:31.702 SO libspdk_init.so.6.0 01:13:31.702 LIB libspdk_accel.a 01:13:31.702 SYMLINK libspdk_init.so 01:13:31.702 SO libspdk_accel.so.16.0 01:13:31.702 LIB libspdk_virtio.a 01:13:31.702 SYMLINK libspdk_accel.so 01:13:31.961 SO libspdk_virtio.so.7.0 01:13:31.961 SYMLINK libspdk_virtio.so 01:13:31.961 LIB libspdk_fsdev.a 01:13:31.961 CC lib/event/app.o 01:13:31.961 CC lib/event/log_rpc.o 01:13:31.961 CC lib/event/app_rpc.o 01:13:31.961 CC lib/event/reactor.o 01:13:31.961 CC lib/event/scheduler_static.o 01:13:31.961 SO libspdk_fsdev.so.2.0 01:13:32.220 CC lib/bdev/bdev.o 01:13:32.220 CC lib/bdev/bdev_rpc.o 01:13:32.220 SYMLINK libspdk_fsdev.so 01:13:32.220 LIB libspdk_nvme.a 01:13:32.220 CC lib/bdev/bdev_zone.o 01:13:32.220 CC lib/bdev/part.o 01:13:32.220 CC lib/fuse_dispatcher/fuse_dispatcher.o 01:13:32.220 CC lib/bdev/scsi_nvme.o 01:13:32.479 SO libspdk_nvme.so.15.0 01:13:32.479 LIB libspdk_event.a 01:13:32.738 SO libspdk_event.so.14.0 01:13:32.738 SYMLINK libspdk_nvme.so 01:13:32.738 SYMLINK libspdk_event.so 01:13:33.015 LIB libspdk_fuse_dispatcher.a 01:13:33.015 SO libspdk_fuse_dispatcher.so.1.0 01:13:33.015 SYMLINK libspdk_fuse_dispatcher.so 01:13:34.394 LIB libspdk_blob.a 01:13:34.394 SO libspdk_blob.so.12.0 01:13:34.394 SYMLINK libspdk_blob.so 01:13:34.961 CC lib/blobfs/tree.o 01:13:34.961 CC lib/blobfs/blobfs.o 01:13:34.961 CC lib/lvol/lvol.o 01:13:35.220 LIB libspdk_bdev.a 01:13:35.220 SO libspdk_bdev.so.17.0 01:13:35.479 SYMLINK libspdk_bdev.so 01:13:35.739 CC lib/nvmf/ctrlr.o 01:13:35.739 CC lib/nbd/nbd.o 01:13:35.739 CC lib/nvmf/ctrlr_discovery.o 01:13:35.739 CC lib/nvmf/ctrlr_bdev.o 01:13:35.739 CC lib/nvmf/subsystem.o 01:13:35.739 CC lib/ublk/ublk.o 01:13:35.739 CC lib/ftl/ftl_core.o 01:13:35.739 CC lib/scsi/dev.o 01:13:35.739 LIB libspdk_blobfs.a 01:13:35.739 SO libspdk_blobfs.so.11.0 01:13:35.999 CC lib/scsi/lun.o 01:13:35.999 SYMLINK libspdk_blobfs.so 01:13:35.999 CC lib/ublk/ublk_rpc.o 01:13:35.999 LIB libspdk_lvol.a 01:13:35.999 SO libspdk_lvol.so.11.0 01:13:35.999 CC lib/nbd/nbd_rpc.o 01:13:35.999 SYMLINK 
libspdk_lvol.so 01:13:35.999 CC lib/ftl/ftl_init.o 01:13:35.999 CC lib/ftl/ftl_layout.o 01:13:35.999 CC lib/scsi/port.o 01:13:36.258 CC lib/nvmf/nvmf.o 01:13:36.258 CC lib/scsi/scsi.o 01:13:36.258 LIB libspdk_nbd.a 01:13:36.258 CC lib/scsi/scsi_bdev.o 01:13:36.258 SO libspdk_nbd.so.7.0 01:13:36.258 CC lib/scsi/scsi_pr.o 01:13:36.258 SYMLINK libspdk_nbd.so 01:13:36.258 CC lib/scsi/scsi_rpc.o 01:13:36.258 LIB libspdk_ublk.a 01:13:36.258 SO libspdk_ublk.so.3.0 01:13:36.258 CC lib/nvmf/nvmf_rpc.o 01:13:36.517 CC lib/nvmf/transport.o 01:13:36.517 CC lib/ftl/ftl_debug.o 01:13:36.517 SYMLINK libspdk_ublk.so 01:13:36.517 CC lib/nvmf/tcp.o 01:13:36.517 CC lib/ftl/ftl_io.o 01:13:36.517 CC lib/ftl/ftl_sb.o 01:13:36.517 CC lib/nvmf/stubs.o 01:13:36.777 CC lib/scsi/task.o 01:13:36.777 CC lib/ftl/ftl_l2p.o 01:13:36.777 CC lib/nvmf/mdns_server.o 01:13:36.777 LIB libspdk_scsi.a 01:13:37.036 CC lib/nvmf/rdma.o 01:13:37.036 CC lib/ftl/ftl_l2p_flat.o 01:13:37.036 SO libspdk_scsi.so.9.0 01:13:37.036 CC lib/nvmf/auth.o 01:13:37.036 SYMLINK libspdk_scsi.so 01:13:37.036 CC lib/ftl/ftl_nv_cache.o 01:13:37.036 CC lib/ftl/ftl_band.o 01:13:37.036 CC lib/ftl/ftl_band_ops.o 01:13:37.296 CC lib/ftl/ftl_writer.o 01:13:37.296 CC lib/ftl/ftl_rq.o 01:13:37.296 CC lib/ftl/ftl_reloc.o 01:13:37.296 CC lib/ftl/ftl_l2p_cache.o 01:13:37.556 CC lib/ftl/ftl_p2l.o 01:13:37.556 CC lib/ftl/ftl_p2l_log.o 01:13:37.556 CC lib/ftl/mngt/ftl_mngt.o 01:13:37.556 CC lib/ftl/mngt/ftl_mngt_bdev.o 01:13:37.556 CC lib/ftl/mngt/ftl_mngt_shutdown.o 01:13:37.816 CC lib/ftl/mngt/ftl_mngt_startup.o 01:13:37.816 CC lib/ftl/mngt/ftl_mngt_md.o 01:13:37.816 CC lib/ftl/mngt/ftl_mngt_misc.o 01:13:37.816 CC lib/ftl/mngt/ftl_mngt_ioch.o 01:13:38.075 CC lib/ftl/mngt/ftl_mngt_l2p.o 01:13:38.075 CC lib/ftl/mngt/ftl_mngt_band.o 01:13:38.075 CC lib/vhost/vhost.o 01:13:38.075 CC lib/iscsi/conn.o 01:13:38.075 CC lib/ftl/mngt/ftl_mngt_self_test.o 01:13:38.075 CC lib/ftl/mngt/ftl_mngt_p2l.o 01:13:38.075 CC lib/vhost/vhost_rpc.o 01:13:38.075 CC lib/ftl/mngt/ftl_mngt_recovery.o 01:13:38.075 CC lib/vhost/vhost_scsi.o 01:13:38.075 CC lib/vhost/vhost_blk.o 01:13:38.335 CC lib/vhost/rte_vhost_user.o 01:13:38.335 CC lib/iscsi/init_grp.o 01:13:38.335 CC lib/iscsi/iscsi.o 01:13:38.595 CC lib/ftl/mngt/ftl_mngt_upgrade.o 01:13:38.595 CC lib/iscsi/param.o 01:13:38.595 CC lib/ftl/utils/ftl_conf.o 01:13:38.595 CC lib/ftl/utils/ftl_md.o 01:13:38.855 CC lib/iscsi/portal_grp.o 01:13:38.855 CC lib/iscsi/tgt_node.o 01:13:38.855 CC lib/iscsi/iscsi_subsystem.o 01:13:38.855 CC lib/iscsi/iscsi_rpc.o 01:13:39.115 CC lib/iscsi/task.o 01:13:39.115 CC lib/ftl/utils/ftl_mempool.o 01:13:39.115 CC lib/ftl/utils/ftl_bitmap.o 01:13:39.115 CC lib/ftl/utils/ftl_property.o 01:13:39.115 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 01:13:39.115 CC lib/ftl/upgrade/ftl_layout_upgrade.o 01:13:39.374 CC lib/ftl/upgrade/ftl_sb_upgrade.o 01:13:39.374 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 01:13:39.374 LIB libspdk_vhost.a 01:13:39.374 CC lib/ftl/upgrade/ftl_band_upgrade.o 01:13:39.374 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 01:13:39.374 LIB libspdk_nvmf.a 01:13:39.374 SO libspdk_vhost.so.8.0 01:13:39.374 CC lib/ftl/upgrade/ftl_trim_upgrade.o 01:13:39.374 CC lib/ftl/upgrade/ftl_sb_v3.o 01:13:39.374 CC lib/ftl/upgrade/ftl_sb_v5.o 01:13:39.374 CC lib/ftl/nvc/ftl_nvc_dev.o 01:13:39.374 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 01:13:39.374 SYMLINK libspdk_vhost.so 01:13:39.634 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 01:13:39.634 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 01:13:39.634 SO libspdk_nvmf.so.20.0 01:13:39.634 CC 
lib/ftl/base/ftl_base_dev.o 01:13:39.634 CC lib/ftl/base/ftl_base_bdev.o 01:13:39.634 CC lib/ftl/ftl_trace.o 01:13:39.893 SYMLINK libspdk_nvmf.so 01:13:39.893 LIB libspdk_iscsi.a 01:13:39.893 LIB libspdk_ftl.a 01:13:39.893 SO libspdk_iscsi.so.8.0 01:13:40.193 SYMLINK libspdk_iscsi.so 01:13:40.193 SO libspdk_ftl.so.9.0 01:13:40.464 SYMLINK libspdk_ftl.so 01:13:41.032 CC module/env_dpdk/env_dpdk_rpc.o 01:13:41.032 CC module/scheduler/dynamic/scheduler_dynamic.o 01:13:41.032 CC module/accel/ioat/accel_ioat.o 01:13:41.032 CC module/sock/posix/posix.o 01:13:41.032 CC module/scheduler/dpdk_governor/dpdk_governor.o 01:13:41.032 CC module/accel/dsa/accel_dsa.o 01:13:41.032 CC module/blob/bdev/blob_bdev.o 01:13:41.032 CC module/accel/error/accel_error.o 01:13:41.032 CC module/fsdev/aio/fsdev_aio.o 01:13:41.032 CC module/keyring/file/keyring.o 01:13:41.032 LIB libspdk_env_dpdk_rpc.a 01:13:41.032 SO libspdk_env_dpdk_rpc.so.6.0 01:13:41.032 SYMLINK libspdk_env_dpdk_rpc.so 01:13:41.032 CC module/accel/ioat/accel_ioat_rpc.o 01:13:41.032 LIB libspdk_scheduler_dpdk_governor.a 01:13:41.032 CC module/keyring/file/keyring_rpc.o 01:13:41.032 SO libspdk_scheduler_dpdk_governor.so.4.0 01:13:41.032 LIB libspdk_scheduler_dynamic.a 01:13:41.291 SO libspdk_scheduler_dynamic.so.4.0 01:13:41.291 SYMLINK libspdk_scheduler_dpdk_governor.so 01:13:41.291 CC module/accel/error/accel_error_rpc.o 01:13:41.291 LIB libspdk_accel_ioat.a 01:13:41.291 SO libspdk_accel_ioat.so.6.0 01:13:41.291 SYMLINK libspdk_scheduler_dynamic.so 01:13:41.291 LIB libspdk_blob_bdev.a 01:13:41.291 CC module/accel/dsa/accel_dsa_rpc.o 01:13:41.291 LIB libspdk_keyring_file.a 01:13:41.291 SO libspdk_blob_bdev.so.12.0 01:13:41.291 SO libspdk_keyring_file.so.2.0 01:13:41.291 SYMLINK libspdk_accel_ioat.so 01:13:41.291 CC module/accel/iaa/accel_iaa.o 01:13:41.291 CC module/accel/iaa/accel_iaa_rpc.o 01:13:41.291 LIB libspdk_accel_error.a 01:13:41.291 SYMLINK libspdk_blob_bdev.so 01:13:41.291 CC module/fsdev/aio/fsdev_aio_rpc.o 01:13:41.291 SYMLINK libspdk_keyring_file.so 01:13:41.291 CC module/fsdev/aio/linux_aio_mgr.o 01:13:41.291 CC module/scheduler/gscheduler/gscheduler.o 01:13:41.291 SO libspdk_accel_error.so.2.0 01:13:41.549 LIB libspdk_accel_dsa.a 01:13:41.549 CC module/keyring/linux/keyring.o 01:13:41.549 SYMLINK libspdk_accel_error.so 01:13:41.549 CC module/keyring/linux/keyring_rpc.o 01:13:41.549 SO libspdk_accel_dsa.so.5.0 01:13:41.549 LIB libspdk_scheduler_gscheduler.a 01:13:41.549 LIB libspdk_accel_iaa.a 01:13:41.549 SO libspdk_scheduler_gscheduler.so.4.0 01:13:41.549 SYMLINK libspdk_accel_dsa.so 01:13:41.549 SO libspdk_accel_iaa.so.3.0 01:13:41.549 SYMLINK libspdk_scheduler_gscheduler.so 01:13:41.549 LIB libspdk_keyring_linux.a 01:13:41.549 SO libspdk_keyring_linux.so.1.0 01:13:41.549 SYMLINK libspdk_accel_iaa.so 01:13:41.807 LIB libspdk_fsdev_aio.a 01:13:41.807 SYMLINK libspdk_keyring_linux.so 01:13:41.807 SO libspdk_fsdev_aio.so.1.0 01:13:41.807 LIB libspdk_sock_posix.a 01:13:41.807 CC module/bdev/gpt/gpt.o 01:13:41.807 CC module/bdev/delay/vbdev_delay.o 01:13:41.807 CC module/bdev/lvol/vbdev_lvol.o 01:13:41.807 CC module/bdev/error/vbdev_error.o 01:13:41.807 SO libspdk_sock_posix.so.6.0 01:13:41.807 CC module/bdev/malloc/bdev_malloc.o 01:13:41.807 CC module/blobfs/bdev/blobfs_bdev.o 01:13:41.807 SYMLINK libspdk_fsdev_aio.so 01:13:41.807 CC module/bdev/lvol/vbdev_lvol_rpc.o 01:13:41.807 CC module/bdev/null/bdev_null.o 01:13:41.807 CC module/bdev/nvme/bdev_nvme.o 01:13:41.807 SYMLINK libspdk_sock_posix.so 01:13:41.807 CC 
module/bdev/nvme/bdev_nvme_rpc.o 01:13:42.066 CC module/bdev/gpt/vbdev_gpt.o 01:13:42.066 CC module/blobfs/bdev/blobfs_bdev_rpc.o 01:13:42.066 CC module/bdev/error/vbdev_error_rpc.o 01:13:42.066 CC module/bdev/null/bdev_null_rpc.o 01:13:42.066 CC module/bdev/delay/vbdev_delay_rpc.o 01:13:42.066 LIB libspdk_blobfs_bdev.a 01:13:42.066 CC module/bdev/malloc/bdev_malloc_rpc.o 01:13:42.066 SO libspdk_blobfs_bdev.so.6.0 01:13:42.325 LIB libspdk_bdev_error.a 01:13:42.325 LIB libspdk_bdev_gpt.a 01:13:42.325 SO libspdk_bdev_error.so.6.0 01:13:42.325 SO libspdk_bdev_gpt.so.6.0 01:13:42.325 SYMLINK libspdk_blobfs_bdev.so 01:13:42.325 CC module/bdev/nvme/nvme_rpc.o 01:13:42.325 LIB libspdk_bdev_null.a 01:13:42.325 LIB libspdk_bdev_lvol.a 01:13:42.325 LIB libspdk_bdev_delay.a 01:13:42.325 SYMLINK libspdk_bdev_error.so 01:13:42.325 SYMLINK libspdk_bdev_gpt.so 01:13:42.325 SO libspdk_bdev_null.so.6.0 01:13:42.325 LIB libspdk_bdev_malloc.a 01:13:42.325 SO libspdk_bdev_lvol.so.6.0 01:13:42.325 SO libspdk_bdev_delay.so.6.0 01:13:42.325 SO libspdk_bdev_malloc.so.6.0 01:13:42.325 CC module/bdev/passthru/vbdev_passthru.o 01:13:42.325 SYMLINK libspdk_bdev_null.so 01:13:42.325 CC module/bdev/nvme/bdev_mdns_client.o 01:13:42.325 SYMLINK libspdk_bdev_delay.so 01:13:42.325 CC module/bdev/nvme/vbdev_opal.o 01:13:42.325 SYMLINK libspdk_bdev_lvol.so 01:13:42.325 CC module/bdev/nvme/vbdev_opal_rpc.o 01:13:42.325 SYMLINK libspdk_bdev_malloc.so 01:13:42.325 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 01:13:42.583 CC module/bdev/split/vbdev_split.o 01:13:42.583 CC module/bdev/raid/bdev_raid.o 01:13:42.583 CC module/bdev/raid/bdev_raid_rpc.o 01:13:42.583 CC module/bdev/raid/bdev_raid_sb.o 01:13:42.583 CC module/bdev/raid/raid0.o 01:13:42.583 CC module/bdev/raid/raid1.o 01:13:42.583 CC module/bdev/raid/concat.o 01:13:42.583 CC module/bdev/passthru/vbdev_passthru_rpc.o 01:13:42.583 CC module/bdev/split/vbdev_split_rpc.o 01:13:42.848 LIB libspdk_bdev_passthru.a 01:13:42.848 LIB libspdk_bdev_split.a 01:13:42.848 CC module/bdev/zone_block/vbdev_zone_block.o 01:13:42.848 SO libspdk_bdev_passthru.so.6.0 01:13:42.848 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 01:13:42.848 SO libspdk_bdev_split.so.6.0 01:13:42.848 CC module/bdev/xnvme/bdev_xnvme.o 01:13:42.849 CC module/bdev/xnvme/bdev_xnvme_rpc.o 01:13:42.849 SYMLINK libspdk_bdev_passthru.so 01:13:42.849 SYMLINK libspdk_bdev_split.so 01:13:42.849 CC module/bdev/aio/bdev_aio.o 01:13:43.110 CC module/bdev/aio/bdev_aio_rpc.o 01:13:43.110 CC module/bdev/ftl/bdev_ftl.o 01:13:43.110 CC module/bdev/ftl/bdev_ftl_rpc.o 01:13:43.110 CC module/bdev/iscsi/bdev_iscsi.o 01:13:43.110 CC module/bdev/virtio/bdev_virtio_scsi.o 01:13:43.110 LIB libspdk_bdev_xnvme.a 01:13:43.110 SO libspdk_bdev_xnvme.so.3.0 01:13:43.110 LIB libspdk_bdev_zone_block.a 01:13:43.110 CC module/bdev/iscsi/bdev_iscsi_rpc.o 01:13:43.368 SO libspdk_bdev_zone_block.so.6.0 01:13:43.368 SYMLINK libspdk_bdev_xnvme.so 01:13:43.368 CC module/bdev/virtio/bdev_virtio_blk.o 01:13:43.368 CC module/bdev/virtio/bdev_virtio_rpc.o 01:13:43.368 SYMLINK libspdk_bdev_zone_block.so 01:13:43.368 LIB libspdk_bdev_ftl.a 01:13:43.368 LIB libspdk_bdev_aio.a 01:13:43.368 SO libspdk_bdev_ftl.so.6.0 01:13:43.368 SO libspdk_bdev_aio.so.6.0 01:13:43.368 SYMLINK libspdk_bdev_ftl.so 01:13:43.368 SYMLINK libspdk_bdev_aio.so 01:13:43.626 LIB libspdk_bdev_iscsi.a 01:13:43.626 SO libspdk_bdev_iscsi.so.6.0 01:13:43.626 LIB libspdk_bdev_raid.a 01:13:43.626 SYMLINK libspdk_bdev_iscsi.so 01:13:43.626 LIB libspdk_bdev_virtio.a 01:13:43.626 SO 
libspdk_bdev_raid.so.6.0 01:13:43.882 SO libspdk_bdev_virtio.so.6.0 01:13:43.882 SYMLINK libspdk_bdev_raid.so 01:13:43.882 SYMLINK libspdk_bdev_virtio.so 01:13:44.814 LIB libspdk_bdev_nvme.a 01:13:44.814 SO libspdk_bdev_nvme.so.7.1 01:13:45.073 SYMLINK libspdk_bdev_nvme.so 01:13:45.642 CC module/event/subsystems/keyring/keyring.o 01:13:45.642 CC module/event/subsystems/scheduler/scheduler.o 01:13:45.642 CC module/event/subsystems/vmd/vmd_rpc.o 01:13:45.642 CC module/event/subsystems/fsdev/fsdev.o 01:13:45.642 CC module/event/subsystems/vmd/vmd.o 01:13:45.642 CC module/event/subsystems/sock/sock.o 01:13:45.642 CC module/event/subsystems/vhost_blk/vhost_blk.o 01:13:45.642 CC module/event/subsystems/iobuf/iobuf.o 01:13:45.642 CC module/event/subsystems/iobuf/iobuf_rpc.o 01:13:45.642 LIB libspdk_event_fsdev.a 01:13:45.642 LIB libspdk_event_scheduler.a 01:13:45.642 LIB libspdk_event_sock.a 01:13:45.642 LIB libspdk_event_vmd.a 01:13:45.642 LIB libspdk_event_keyring.a 01:13:45.642 LIB libspdk_event_vhost_blk.a 01:13:45.642 SO libspdk_event_fsdev.so.1.0 01:13:45.642 SO libspdk_event_sock.so.5.0 01:13:45.642 SO libspdk_event_scheduler.so.4.0 01:13:45.642 SO libspdk_event_keyring.so.1.0 01:13:45.642 LIB libspdk_event_iobuf.a 01:13:45.642 SO libspdk_event_vmd.so.6.0 01:13:45.642 SO libspdk_event_vhost_blk.so.3.0 01:13:45.642 SO libspdk_event_iobuf.so.3.0 01:13:45.642 SYMLINK libspdk_event_sock.so 01:13:45.642 SYMLINK libspdk_event_fsdev.so 01:13:45.901 SYMLINK libspdk_event_scheduler.so 01:13:45.901 SYMLINK libspdk_event_keyring.so 01:13:45.901 SYMLINK libspdk_event_vhost_blk.so 01:13:45.901 SYMLINK libspdk_event_vmd.so 01:13:45.901 SYMLINK libspdk_event_iobuf.so 01:13:46.161 CC module/event/subsystems/accel/accel.o 01:13:46.420 LIB libspdk_event_accel.a 01:13:46.420 SO libspdk_event_accel.so.6.0 01:13:46.420 SYMLINK libspdk_event_accel.so 01:13:46.989 CC module/event/subsystems/bdev/bdev.o 01:13:46.989 LIB libspdk_event_bdev.a 01:13:47.248 SO libspdk_event_bdev.so.6.0 01:13:47.248 SYMLINK libspdk_event_bdev.so 01:13:47.507 CC module/event/subsystems/nbd/nbd.o 01:13:47.507 CC module/event/subsystems/scsi/scsi.o 01:13:47.507 CC module/event/subsystems/nvmf/nvmf_rpc.o 01:13:47.507 CC module/event/subsystems/nvmf/nvmf_tgt.o 01:13:47.507 CC module/event/subsystems/ublk/ublk.o 01:13:47.766 LIB libspdk_event_nbd.a 01:13:47.766 LIB libspdk_event_scsi.a 01:13:47.766 SO libspdk_event_nbd.so.6.0 01:13:47.766 LIB libspdk_event_ublk.a 01:13:47.766 SO libspdk_event_scsi.so.6.0 01:13:47.766 SO libspdk_event_ublk.so.3.0 01:13:47.766 SYMLINK libspdk_event_nbd.so 01:13:47.766 SYMLINK libspdk_event_scsi.so 01:13:47.766 LIB libspdk_event_nvmf.a 01:13:47.766 SYMLINK libspdk_event_ublk.so 01:13:47.766 SO libspdk_event_nvmf.so.6.0 01:13:48.025 SYMLINK libspdk_event_nvmf.so 01:13:48.025 CC module/event/subsystems/iscsi/iscsi.o 01:13:48.025 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 01:13:48.284 LIB libspdk_event_iscsi.a 01:13:48.284 LIB libspdk_event_vhost_scsi.a 01:13:48.284 SO libspdk_event_vhost_scsi.so.3.0 01:13:48.284 SO libspdk_event_iscsi.so.6.0 01:13:48.284 SYMLINK libspdk_event_vhost_scsi.so 01:13:48.284 SYMLINK libspdk_event_iscsi.so 01:13:48.543 SO libspdk.so.6.0 01:13:48.543 SYMLINK libspdk.so 01:13:48.803 CC app/trace_record/trace_record.o 01:13:48.803 CXX app/trace/trace.o 01:13:48.803 TEST_HEADER include/spdk/accel.h 01:13:48.803 TEST_HEADER include/spdk/accel_module.h 01:13:48.803 TEST_HEADER include/spdk/assert.h 01:13:49.064 TEST_HEADER include/spdk/barrier.h 01:13:49.064 TEST_HEADER 
include/spdk/base64.h 01:13:49.064 TEST_HEADER include/spdk/bdev.h 01:13:49.064 TEST_HEADER include/spdk/bdev_module.h 01:13:49.064 TEST_HEADER include/spdk/bdev_zone.h 01:13:49.064 TEST_HEADER include/spdk/bit_array.h 01:13:49.064 TEST_HEADER include/spdk/bit_pool.h 01:13:49.064 TEST_HEADER include/spdk/blob_bdev.h 01:13:49.064 TEST_HEADER include/spdk/blobfs_bdev.h 01:13:49.064 TEST_HEADER include/spdk/blobfs.h 01:13:49.064 TEST_HEADER include/spdk/blob.h 01:13:49.064 TEST_HEADER include/spdk/conf.h 01:13:49.064 TEST_HEADER include/spdk/config.h 01:13:49.064 TEST_HEADER include/spdk/cpuset.h 01:13:49.064 CC examples/interrupt_tgt/interrupt_tgt.o 01:13:49.064 TEST_HEADER include/spdk/crc16.h 01:13:49.064 TEST_HEADER include/spdk/crc32.h 01:13:49.064 TEST_HEADER include/spdk/crc64.h 01:13:49.064 TEST_HEADER include/spdk/dif.h 01:13:49.064 TEST_HEADER include/spdk/dma.h 01:13:49.064 TEST_HEADER include/spdk/endian.h 01:13:49.064 TEST_HEADER include/spdk/env_dpdk.h 01:13:49.064 TEST_HEADER include/spdk/env.h 01:13:49.064 TEST_HEADER include/spdk/event.h 01:13:49.064 TEST_HEADER include/spdk/fd_group.h 01:13:49.064 TEST_HEADER include/spdk/fd.h 01:13:49.064 TEST_HEADER include/spdk/file.h 01:13:49.064 TEST_HEADER include/spdk/fsdev.h 01:13:49.064 TEST_HEADER include/spdk/fsdev_module.h 01:13:49.064 TEST_HEADER include/spdk/ftl.h 01:13:49.064 TEST_HEADER include/spdk/fuse_dispatcher.h 01:13:49.064 CC examples/ioat/perf/perf.o 01:13:49.064 TEST_HEADER include/spdk/gpt_spec.h 01:13:49.064 CC test/thread/poller_perf/poller_perf.o 01:13:49.065 TEST_HEADER include/spdk/hexlify.h 01:13:49.065 CC examples/util/zipf/zipf.o 01:13:49.065 TEST_HEADER include/spdk/histogram_data.h 01:13:49.065 TEST_HEADER include/spdk/idxd.h 01:13:49.065 TEST_HEADER include/spdk/idxd_spec.h 01:13:49.065 TEST_HEADER include/spdk/init.h 01:13:49.065 TEST_HEADER include/spdk/ioat.h 01:13:49.065 TEST_HEADER include/spdk/ioat_spec.h 01:13:49.065 TEST_HEADER include/spdk/iscsi_spec.h 01:13:49.065 TEST_HEADER include/spdk/json.h 01:13:49.065 TEST_HEADER include/spdk/jsonrpc.h 01:13:49.065 TEST_HEADER include/spdk/keyring.h 01:13:49.065 TEST_HEADER include/spdk/keyring_module.h 01:13:49.065 TEST_HEADER include/spdk/likely.h 01:13:49.065 TEST_HEADER include/spdk/log.h 01:13:49.065 TEST_HEADER include/spdk/lvol.h 01:13:49.065 TEST_HEADER include/spdk/md5.h 01:13:49.065 TEST_HEADER include/spdk/memory.h 01:13:49.065 TEST_HEADER include/spdk/mmio.h 01:13:49.065 TEST_HEADER include/spdk/nbd.h 01:13:49.065 TEST_HEADER include/spdk/net.h 01:13:49.065 TEST_HEADER include/spdk/notify.h 01:13:49.065 TEST_HEADER include/spdk/nvme.h 01:13:49.065 TEST_HEADER include/spdk/nvme_intel.h 01:13:49.065 CC test/app/bdev_svc/bdev_svc.o 01:13:49.065 TEST_HEADER include/spdk/nvme_ocssd.h 01:13:49.065 TEST_HEADER include/spdk/nvme_ocssd_spec.h 01:13:49.065 CC test/dma/test_dma/test_dma.o 01:13:49.065 TEST_HEADER include/spdk/nvme_spec.h 01:13:49.065 TEST_HEADER include/spdk/nvme_zns.h 01:13:49.065 TEST_HEADER include/spdk/nvmf_cmd.h 01:13:49.065 TEST_HEADER include/spdk/nvmf_fc_spec.h 01:13:49.065 TEST_HEADER include/spdk/nvmf.h 01:13:49.065 TEST_HEADER include/spdk/nvmf_spec.h 01:13:49.065 TEST_HEADER include/spdk/nvmf_transport.h 01:13:49.065 TEST_HEADER include/spdk/opal.h 01:13:49.065 TEST_HEADER include/spdk/opal_spec.h 01:13:49.065 TEST_HEADER include/spdk/pci_ids.h 01:13:49.065 TEST_HEADER include/spdk/pipe.h 01:13:49.065 TEST_HEADER include/spdk/queue.h 01:13:49.065 TEST_HEADER include/spdk/reduce.h 01:13:49.065 TEST_HEADER include/spdk/rpc.h 
01:13:49.065 TEST_HEADER include/spdk/scheduler.h 01:13:49.065 TEST_HEADER include/spdk/scsi.h 01:13:49.065 TEST_HEADER include/spdk/scsi_spec.h 01:13:49.065 CC test/env/mem_callbacks/mem_callbacks.o 01:13:49.065 TEST_HEADER include/spdk/sock.h 01:13:49.065 TEST_HEADER include/spdk/stdinc.h 01:13:49.065 TEST_HEADER include/spdk/string.h 01:13:49.065 TEST_HEADER include/spdk/thread.h 01:13:49.065 TEST_HEADER include/spdk/trace.h 01:13:49.065 TEST_HEADER include/spdk/trace_parser.h 01:13:49.065 TEST_HEADER include/spdk/tree.h 01:13:49.065 TEST_HEADER include/spdk/ublk.h 01:13:49.065 TEST_HEADER include/spdk/util.h 01:13:49.065 TEST_HEADER include/spdk/uuid.h 01:13:49.065 TEST_HEADER include/spdk/version.h 01:13:49.065 TEST_HEADER include/spdk/vfio_user_pci.h 01:13:49.065 TEST_HEADER include/spdk/vfio_user_spec.h 01:13:49.065 TEST_HEADER include/spdk/vhost.h 01:13:49.065 TEST_HEADER include/spdk/vmd.h 01:13:49.065 TEST_HEADER include/spdk/xor.h 01:13:49.065 TEST_HEADER include/spdk/zipf.h 01:13:49.065 CXX test/cpp_headers/accel.o 01:13:49.065 LINK interrupt_tgt 01:13:49.065 LINK poller_perf 01:13:49.065 LINK zipf 01:13:49.346 LINK spdk_trace_record 01:13:49.346 LINK bdev_svc 01:13:49.346 LINK ioat_perf 01:13:49.346 CXX test/cpp_headers/accel_module.o 01:13:49.346 CXX test/cpp_headers/assert.o 01:13:49.346 CXX test/cpp_headers/barrier.o 01:13:49.346 LINK spdk_trace 01:13:49.346 CC examples/ioat/verify/verify.o 01:13:49.346 CC app/nvmf_tgt/nvmf_main.o 01:13:49.346 CXX test/cpp_headers/base64.o 01:13:49.605 CXX test/cpp_headers/bdev.o 01:13:49.605 CC app/iscsi_tgt/iscsi_tgt.o 01:13:49.605 CC test/env/vtophys/vtophys.o 01:13:49.605 LINK test_dma 01:13:49.605 CC test/app/histogram_perf/histogram_perf.o 01:13:49.605 LINK mem_callbacks 01:13:49.605 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 01:13:49.605 LINK nvmf_tgt 01:13:49.605 LINK iscsi_tgt 01:13:49.605 LINK verify 01:13:49.605 CXX test/cpp_headers/bdev_module.o 01:13:49.605 LINK vtophys 01:13:49.605 LINK histogram_perf 01:13:49.605 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 01:13:49.864 CXX test/cpp_headers/bdev_zone.o 01:13:49.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:13:49.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:13:49.864 CXX test/cpp_headers/bit_array.o 01:13:49.864 CXX test/cpp_headers/bit_pool.o 01:13:49.864 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:13:49.864 CC test/env/memory/memory_ut.o 01:13:50.123 CC examples/thread/thread/thread_ex.o 01:13:50.123 LINK nvme_fuzz 01:13:50.123 CXX test/cpp_headers/blob_bdev.o 01:13:50.123 CC app/spdk_tgt/spdk_tgt.o 01:13:50.123 CC test/env/pci/pci_ut.o 01:13:50.123 LINK env_dpdk_post_init 01:13:50.123 CC app/spdk_lspci/spdk_lspci.o 01:13:50.123 CXX test/cpp_headers/blobfs_bdev.o 01:13:50.123 LINK spdk_tgt 01:13:50.123 LINK vhost_fuzz 01:13:50.381 LINK thread 01:13:50.381 CC app/spdk_nvme_perf/perf.o 01:13:50.381 LINK spdk_lspci 01:13:50.381 CC app/spdk_nvme_identify/identify.o 01:13:50.381 CXX test/cpp_headers/blobfs.o 01:13:50.381 CXX test/cpp_headers/blob.o 01:13:50.381 LINK pci_ut 01:13:50.381 CC app/spdk_nvme_discover/discovery_aer.o 01:13:50.639 CC app/spdk_top/spdk_top.o 01:13:50.639 CXX test/cpp_headers/conf.o 01:13:50.639 CC examples/sock/hello_world/hello_sock.o 01:13:50.639 CC app/vhost/vhost.o 01:13:50.639 LINK spdk_nvme_discover 01:13:50.639 CXX test/cpp_headers/config.o 01:13:50.898 CC app/spdk_dd/spdk_dd.o 01:13:50.898 CXX test/cpp_headers/cpuset.o 01:13:50.898 LINK hello_sock 01:13:50.898 LINK vhost 01:13:50.898 CXX test/cpp_headers/crc16.o 01:13:51.155 CC 
app/fio/nvme/fio_plugin.o 01:13:51.155 CXX test/cpp_headers/crc32.o 01:13:51.155 LINK spdk_nvme_perf 01:13:51.155 LINK spdk_dd 01:13:51.155 CC examples/vmd/lsvmd/lsvmd.o 01:13:51.155 LINK memory_ut 01:13:51.155 LINK spdk_nvme_identify 01:13:51.155 CC app/fio/bdev/fio_plugin.o 01:13:51.413 CXX test/cpp_headers/crc64.o 01:13:51.413 LINK lsvmd 01:13:51.413 CXX test/cpp_headers/dif.o 01:13:51.413 CXX test/cpp_headers/dma.o 01:13:51.413 CC examples/idxd/perf/perf.o 01:13:51.413 LINK spdk_top 01:13:51.413 CC examples/vmd/led/led.o 01:13:51.413 CXX test/cpp_headers/endian.o 01:13:51.671 LINK iscsi_fuzz 01:13:51.671 CXX test/cpp_headers/env_dpdk.o 01:13:51.671 LINK spdk_nvme 01:13:51.671 LINK led 01:13:51.671 CC examples/fsdev/hello_world/hello_fsdev.o 01:13:51.671 CC examples/accel/perf/accel_perf.o 01:13:51.929 CXX test/cpp_headers/env.o 01:13:51.929 CC examples/blob/hello_world/hello_blob.o 01:13:51.929 CC examples/blob/cli/blobcli.o 01:13:51.929 CXX test/cpp_headers/event.o 01:13:51.929 LINK idxd_perf 01:13:51.929 LINK spdk_bdev 01:13:51.929 CC test/app/jsoncat/jsoncat.o 01:13:51.929 CC examples/nvme/hello_world/hello_world.o 01:13:51.929 LINK hello_fsdev 01:13:51.929 CXX test/cpp_headers/fd_group.o 01:13:51.929 LINK jsoncat 01:13:51.929 LINK hello_blob 01:13:52.202 CC examples/nvme/reconnect/reconnect.o 01:13:52.202 CC test/app/stub/stub.o 01:13:52.202 CC test/rpc_client/rpc_client_test.o 01:13:52.202 LINK hello_world 01:13:52.202 CXX test/cpp_headers/fd.o 01:13:52.202 LINK accel_perf 01:13:52.202 LINK stub 01:13:52.202 LINK rpc_client_test 01:13:52.202 LINK blobcli 01:13:52.460 CXX test/cpp_headers/file.o 01:13:52.460 CC test/accel/dif/dif.o 01:13:52.460 CC examples/nvme/nvme_manage/nvme_manage.o 01:13:52.460 CC examples/nvme/arbitration/arbitration.o 01:13:52.460 CC test/blobfs/mkfs/mkfs.o 01:13:52.460 CXX test/cpp_headers/fsdev.o 01:13:52.460 LINK reconnect 01:13:52.460 CC examples/nvme/hotplug/hotplug.o 01:13:52.727 LINK mkfs 01:13:52.727 CC test/event/event_perf/event_perf.o 01:13:52.727 CC test/event/reactor/reactor.o 01:13:52.727 CXX test/cpp_headers/fsdev_module.o 01:13:52.727 CC examples/nvme/cmb_copy/cmb_copy.o 01:13:52.727 LINK arbitration 01:13:52.727 CC examples/bdev/hello_world/hello_bdev.o 01:13:52.727 LINK reactor 01:13:52.727 LINK event_perf 01:13:52.727 LINK hotplug 01:13:52.727 CXX test/cpp_headers/ftl.o 01:13:52.986 LINK cmb_copy 01:13:52.986 CC examples/bdev/bdevperf/bdevperf.o 01:13:52.986 LINK nvme_manage 01:13:52.986 CC test/event/reactor_perf/reactor_perf.o 01:13:52.986 CC test/event/app_repeat/app_repeat.o 01:13:52.986 LINK hello_bdev 01:13:52.986 CXX test/cpp_headers/fuse_dispatcher.o 01:13:52.986 CC test/event/scheduler/scheduler.o 01:13:52.986 CC examples/nvme/abort/abort.o 01:13:52.986 LINK dif 01:13:52.986 LINK reactor_perf 01:13:53.244 LINK app_repeat 01:13:53.244 CXX test/cpp_headers/gpt_spec.o 01:13:53.244 CXX test/cpp_headers/hexlify.o 01:13:53.244 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:13:53.244 CXX test/cpp_headers/histogram_data.o 01:13:53.244 CXX test/cpp_headers/idxd.o 01:13:53.244 LINK scheduler 01:13:53.244 CC test/lvol/esnap/esnap.o 01:13:53.244 CXX test/cpp_headers/idxd_spec.o 01:13:53.501 CXX test/cpp_headers/init.o 01:13:53.501 LINK pmr_persistence 01:13:53.501 LINK abort 01:13:53.501 CXX test/cpp_headers/ioat.o 01:13:53.501 CXX test/cpp_headers/ioat_spec.o 01:13:53.501 CC test/nvme/aer/aer.o 01:13:53.501 CC test/nvme/reset/reset.o 01:13:53.501 CC test/bdev/bdevio/bdevio.o 01:13:53.759 CXX test/cpp_headers/iscsi_spec.o 01:13:53.759 
CC test/nvme/e2edp/nvme_dp.o 01:13:53.759 CC test/nvme/sgl/sgl.o 01:13:53.759 CC test/nvme/overhead/overhead.o 01:13:53.759 CC test/nvme/err_injection/err_injection.o 01:13:53.759 LINK bdevperf 01:13:53.759 LINK reset 01:13:53.759 LINK aer 01:13:53.759 CXX test/cpp_headers/json.o 01:13:54.016 LINK err_injection 01:13:54.016 LINK bdevio 01:13:54.016 LINK nvme_dp 01:13:54.016 LINK sgl 01:13:54.016 CXX test/cpp_headers/jsonrpc.o 01:13:54.016 LINK overhead 01:13:54.016 CC test/nvme/startup/startup.o 01:13:54.016 CC test/nvme/reserve/reserve.o 01:13:54.016 CC test/nvme/simple_copy/simple_copy.o 01:13:54.276 CXX test/cpp_headers/keyring.o 01:13:54.276 CC test/nvme/connect_stress/connect_stress.o 01:13:54.276 CC examples/nvmf/nvmf/nvmf.o 01:13:54.276 CC test/nvme/boot_partition/boot_partition.o 01:13:54.276 CC test/nvme/compliance/nvme_compliance.o 01:13:54.276 CC test/nvme/fused_ordering/fused_ordering.o 01:13:54.276 LINK startup 01:13:54.276 CXX test/cpp_headers/keyring_module.o 01:13:54.276 LINK reserve 01:13:54.276 LINK boot_partition 01:13:54.276 LINK connect_stress 01:13:54.276 LINK simple_copy 01:13:54.535 LINK fused_ordering 01:13:54.535 CXX test/cpp_headers/likely.o 01:13:54.535 LINK nvmf 01:13:54.535 CC test/nvme/doorbell_aers/doorbell_aers.o 01:13:54.535 CXX test/cpp_headers/log.o 01:13:54.535 CXX test/cpp_headers/lvol.o 01:13:54.535 LINK nvme_compliance 01:13:54.535 CC test/nvme/fdp/fdp.o 01:13:54.535 CC test/nvme/cuse/cuse.o 01:13:54.535 CXX test/cpp_headers/md5.o 01:13:54.535 CXX test/cpp_headers/memory.o 01:13:54.795 CXX test/cpp_headers/mmio.o 01:13:54.795 CXX test/cpp_headers/nbd.o 01:13:54.795 CXX test/cpp_headers/net.o 01:13:54.795 LINK doorbell_aers 01:13:54.795 CXX test/cpp_headers/notify.o 01:13:54.795 CXX test/cpp_headers/nvme.o 01:13:54.795 CXX test/cpp_headers/nvme_intel.o 01:13:54.795 CXX test/cpp_headers/nvme_ocssd.o 01:13:54.795 CXX test/cpp_headers/nvme_ocssd_spec.o 01:13:54.795 CXX test/cpp_headers/nvme_spec.o 01:13:54.795 CXX test/cpp_headers/nvme_zns.o 01:13:54.795 CXX test/cpp_headers/nvmf_cmd.o 01:13:54.795 LINK fdp 01:13:54.795 CXX test/cpp_headers/nvmf_fc_spec.o 01:13:54.795 CXX test/cpp_headers/nvmf.o 01:13:55.054 CXX test/cpp_headers/nvmf_spec.o 01:13:55.054 CXX test/cpp_headers/nvmf_transport.o 01:13:55.054 CXX test/cpp_headers/opal.o 01:13:55.054 CXX test/cpp_headers/opal_spec.o 01:13:55.054 CXX test/cpp_headers/pci_ids.o 01:13:55.054 CXX test/cpp_headers/pipe.o 01:13:55.054 CXX test/cpp_headers/queue.o 01:13:55.054 CXX test/cpp_headers/reduce.o 01:13:55.054 CXX test/cpp_headers/rpc.o 01:13:55.054 CXX test/cpp_headers/scheduler.o 01:13:55.054 CXX test/cpp_headers/scsi.o 01:13:55.313 CXX test/cpp_headers/scsi_spec.o 01:13:55.313 CXX test/cpp_headers/sock.o 01:13:55.313 CXX test/cpp_headers/stdinc.o 01:13:55.313 CXX test/cpp_headers/string.o 01:13:55.313 CXX test/cpp_headers/thread.o 01:13:55.313 CXX test/cpp_headers/trace.o 01:13:55.313 CXX test/cpp_headers/trace_parser.o 01:13:55.313 CXX test/cpp_headers/tree.o 01:13:55.313 CXX test/cpp_headers/ublk.o 01:13:55.313 CXX test/cpp_headers/util.o 01:13:55.313 CXX test/cpp_headers/uuid.o 01:13:55.313 CXX test/cpp_headers/version.o 01:13:55.313 CXX test/cpp_headers/vfio_user_pci.o 01:13:55.313 CXX test/cpp_headers/vfio_user_spec.o 01:13:55.573 CXX test/cpp_headers/vhost.o 01:13:55.573 CXX test/cpp_headers/vmd.o 01:13:55.573 CXX test/cpp_headers/xor.o 01:13:55.573 CXX test/cpp_headers/zipf.o 01:13:55.832 LINK cuse 01:14:00.031 LINK esnap 01:14:00.031 01:14:00.031 real 1m23.183s 01:14:00.031 user 7m1.692s 
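Just below, autotest probes the installed lcov (1.15 here) and runs the scripts/common.sh comparison 'lt 1.15 2' to decide whether the legacy --rc lcov_* coverage options are needed. The trace shows the mechanism: each version string is split on '.', '-' and ':' and the parts are compared numerically, component by component. A condensed sketch of that logic, assuming purely numeric components (the real helper also routes each part through its decimal() normalizer):

    lt() {   # lt A B: succeed when version A sorts strictly before version B
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A newer: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A older: less-than
      done
      return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.0; use legacy --rc options"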
01:14:00.031 sys 1m55.585s 01:14:00.031 05:08:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:14:00.031 05:08:42 make -- common/autotest_common.sh@10 -- $ set +x 01:14:00.031 ************************************ 01:14:00.031 END TEST make 01:14:00.031 ************************************ 01:14:00.031 05:08:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 01:14:00.031 05:08:42 -- pm/common@29 -- $ signal_monitor_resources TERM 01:14:00.032 05:08:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:14:00.032 05:08:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:14:00.032 05:08:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:14:00.032 05:08:42 -- pm/common@44 -- $ pid=5304 01:14:00.032 05:08:42 -- pm/common@50 -- $ kill -TERM 5304 01:14:00.032 05:08:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:14:00.032 05:08:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:14:00.032 05:08:42 -- pm/common@44 -- $ pid=5306 01:14:00.032 05:08:42 -- pm/common@50 -- $ kill -TERM 5306 01:14:00.032 05:08:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 01:14:00.032 05:08:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:14:00.032 05:08:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:00.032 05:08:42 -- common/autotest_common.sh@1693 -- # lcov --version 01:14:00.032 05:08:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:00.032 05:08:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:00.032 05:08:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:00.032 05:08:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:00.032 05:08:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:00.032 05:08:42 -- scripts/common.sh@336 -- # IFS=.-: 01:14:00.032 05:08:42 -- scripts/common.sh@336 -- # read -ra ver1 01:14:00.032 05:08:42 -- scripts/common.sh@337 -- # IFS=.-: 01:14:00.032 05:08:42 -- scripts/common.sh@337 -- # read -ra ver2 01:14:00.032 05:08:42 -- scripts/common.sh@338 -- # local 'op=<' 01:14:00.032 05:08:42 -- scripts/common.sh@340 -- # ver1_l=2 01:14:00.032 05:08:42 -- scripts/common.sh@341 -- # ver2_l=1 01:14:00.032 05:08:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:00.032 05:08:42 -- scripts/common.sh@344 -- # case "$op" in 01:14:00.032 05:08:42 -- scripts/common.sh@345 -- # : 1 01:14:00.032 05:08:42 -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:00.032 05:08:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:00.032 05:08:42 -- scripts/common.sh@365 -- # decimal 1 01:14:00.032 05:08:42 -- scripts/common.sh@353 -- # local d=1 01:14:00.032 05:08:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:00.032 05:08:42 -- scripts/common.sh@355 -- # echo 1 01:14:00.032 05:08:42 -- scripts/common.sh@365 -- # ver1[v]=1 01:14:00.032 05:08:42 -- scripts/common.sh@366 -- # decimal 2 01:14:00.032 05:08:42 -- scripts/common.sh@353 -- # local d=2 01:14:00.032 05:08:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:00.032 05:08:42 -- scripts/common.sh@355 -- # echo 2 01:14:00.032 05:08:42 -- scripts/common.sh@366 -- # ver2[v]=2 01:14:00.032 05:08:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:00.032 05:08:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:00.032 05:08:42 -- scripts/common.sh@368 -- # return 0 01:14:00.032 05:08:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:00.032 05:08:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:00.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:00.032 --rc genhtml_branch_coverage=1 01:14:00.032 --rc genhtml_function_coverage=1 01:14:00.032 --rc genhtml_legend=1 01:14:00.032 --rc geninfo_all_blocks=1 01:14:00.032 --rc geninfo_unexecuted_blocks=1 01:14:00.032 01:14:00.032 ' 01:14:00.032 05:08:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:00.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:00.032 --rc genhtml_branch_coverage=1 01:14:00.032 --rc genhtml_function_coverage=1 01:14:00.032 --rc genhtml_legend=1 01:14:00.032 --rc geninfo_all_blocks=1 01:14:00.032 --rc geninfo_unexecuted_blocks=1 01:14:00.032 01:14:00.032 ' 01:14:00.032 05:08:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:00.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:00.032 --rc genhtml_branch_coverage=1 01:14:00.032 --rc genhtml_function_coverage=1 01:14:00.032 --rc genhtml_legend=1 01:14:00.032 --rc geninfo_all_blocks=1 01:14:00.032 --rc geninfo_unexecuted_blocks=1 01:14:00.032 01:14:00.032 ' 01:14:00.032 05:08:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:00.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:00.032 --rc genhtml_branch_coverage=1 01:14:00.032 --rc genhtml_function_coverage=1 01:14:00.032 --rc genhtml_legend=1 01:14:00.032 --rc geninfo_all_blocks=1 01:14:00.032 --rc geninfo_unexecuted_blocks=1 01:14:00.032 01:14:00.032 ' 01:14:00.032 05:08:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:00.032 05:08:42 -- nvmf/common.sh@7 -- # uname -s 01:14:00.032 05:08:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:00.032 05:08:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:00.032 05:08:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:00.032 05:08:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:00.032 05:08:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:00.032 05:08:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:00.032 05:08:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:00.032 05:08:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:00.032 05:08:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:00.032 05:08:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:00.032 05:08:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:342cf203-c16b-474e-b3e5-4e4e1e38bf6e 01:14:00.032 
05:08:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=342cf203-c16b-474e-b3e5-4e4e1e38bf6e 01:14:00.032 05:08:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:00.032 05:08:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:00.032 05:08:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:14:00.032 05:08:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:00.032 05:08:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:00.032 05:08:42 -- scripts/common.sh@15 -- # shopt -s extglob 01:14:00.032 05:08:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:00.032 05:08:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:00.032 05:08:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:00.032 05:08:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:00.032 05:08:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:00.032 05:08:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:00.032 05:08:42 -- paths/export.sh@5 -- # export PATH 01:14:00.032 05:08:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:00.032 05:08:42 -- nvmf/common.sh@51 -- # : 0 01:14:00.032 05:08:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:00.032 05:08:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:00.032 05:08:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:00.032 05:08:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:00.032 05:08:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:00.032 05:08:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:00.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:00.032 05:08:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:00.032 05:08:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:00.032 05:08:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:00.032 05:08:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 01:14:00.032 05:08:42 -- spdk/autotest.sh@32 -- # uname -s 01:14:00.032 05:08:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 01:14:00.032 05:08:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 01:14:00.032 05:08:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 01:14:00.032 05:08:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 01:14:00.032 05:08:42 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 01:14:00.032 05:08:42 -- spdk/autotest.sh@44 -- # modprobe nbd 01:14:00.032 05:08:42 -- spdk/autotest.sh@46 -- # type -P udevadm 01:14:00.032 05:08:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 01:14:00.032 05:08:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 01:14:00.032 05:08:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54735 01:14:00.032 05:08:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 01:14:00.032 05:08:42 -- pm/common@17 -- # local monitor 01:14:00.032 05:08:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:14:00.032 05:08:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:14:00.032 05:08:42 -- pm/common@25 -- # sleep 1 01:14:00.032 05:08:42 -- pm/common@21 -- # date +%s 01:14:00.032 05:08:42 -- pm/common@21 -- # date +%s 01:14:00.032 05:08:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733720922 01:14:00.032 05:08:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733720922 01:14:00.032 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733720922_collect-vmstat.pm.log 01:14:00.032 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733720922_collect-cpu-load.pm.log 01:14:01.409 05:08:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 01:14:01.410 05:08:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 01:14:01.410 05:08:43 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:01.410 05:08:43 -- common/autotest_common.sh@10 -- # set +x 01:14:01.410 05:08:43 -- spdk/autotest.sh@59 -- # create_test_list 01:14:01.410 05:08:43 -- common/autotest_common.sh@752 -- # xtrace_disable 01:14:01.410 05:08:43 -- common/autotest_common.sh@10 -- # set +x 01:14:01.410 05:08:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 01:14:01.410 05:08:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 01:14:01.410 05:08:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 01:14:01.410 05:08:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 01:14:01.410 05:08:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 01:14:01.410 05:08:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 01:14:01.410 05:08:43 -- common/autotest_common.sh@1457 -- # uname 01:14:01.410 05:08:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 01:14:01.410 05:08:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 01:14:01.410 05:08:43 -- common/autotest_common.sh@1477 -- # uname 01:14:01.410 05:08:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 01:14:01.410 05:08:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 01:14:01.410 05:08:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 01:14:01.410 lcov: LCOV version 1.15 01:14:01.410 05:08:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 01:14:16.284 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 01:14:16.284 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 01:14:31.169 05:09:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 01:14:31.169 05:09:13 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:31.169 05:09:13 -- common/autotest_common.sh@10 -- # set +x 01:14:31.169 05:09:13 -- spdk/autotest.sh@78 -- # rm -f 01:14:31.169 05:09:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:31.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:32.364 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:14:32.364 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:14:32.364 0000:00:12.0 (1b36 0010): Already using the nvme driver 01:14:32.364 0000:00:13.0 (1b36 0010): Already using the nvme driver 01:14:32.364 05:09:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 01:14:32.364 05:09:14 -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:14:32.364 05:09:14 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:14:32.364 05:09:14 -- common/autotest_common.sh@1658 -- # local nvme bdf 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 01:14:32.364 05:09:14 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.364 05:09:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:32.364 05:09:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 01:14:32.364 05:09:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 01:14:32.365 05:09:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:32.365 05:09:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 01:14:32.365 05:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:32.365 05:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:32.365 05:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 01:14:32.365 05:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:14:32.365 05:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:14:32.365 No valid GPT data, bailing 01:14:32.365 05:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:14:32.365 05:09:14 -- scripts/common.sh@394 -- # pt= 01:14:32.365 05:09:14 -- scripts/common.sh@395 -- # return 1 01:14:32.365 05:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 01:14:32.365 1+0 records in 01:14:32.365 1+0 records out 01:14:32.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152191 s, 68.9 MB/s 01:14:32.365 05:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:32.365 05:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:32.365 05:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 01:14:32.365 05:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 01:14:32.365 05:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 01:14:32.365 No valid GPT data, bailing 01:14:32.365 05:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:14:32.365 05:09:14 -- scripts/common.sh@394 -- # pt= 01:14:32.365 05:09:14 -- scripts/common.sh@395 -- # return 1 01:14:32.365 05:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 01:14:32.365 1+0 records in 01:14:32.365 1+0 records out 01:14:32.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715065 s, 147 MB/s 01:14:32.365 05:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:32.365 05:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:32.365 05:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 01:14:32.365 05:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 01:14:32.365 05:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 01:14:32.624 No valid GPT data, bailing 01:14:32.624 05:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 01:14:32.624 05:09:14 -- scripts/common.sh@394 -- # pt= 01:14:32.624 05:09:14 -- scripts/common.sh@395 -- # return 1 01:14:32.624 05:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 01:14:32.624 1+0 
records in 01:14:32.624 1+0 records out 01:14:32.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598 s, 175 MB/s 01:14:32.624 05:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:32.624 05:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:32.624 05:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 01:14:32.624 05:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 01:14:32.624 05:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 01:14:32.624 No valid GPT data, bailing 01:14:32.624 05:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 01:14:32.624 05:09:14 -- scripts/common.sh@394 -- # pt= 01:14:32.624 05:09:14 -- scripts/common.sh@395 -- # return 1 01:14:32.624 05:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 01:14:32.624 1+0 records in 01:14:32.624 1+0 records out 01:14:32.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00592049 s, 177 MB/s 01:14:32.624 05:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:32.624 05:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:32.624 05:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 01:14:32.624 05:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 01:14:32.624 05:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 01:14:32.624 No valid GPT data, bailing 01:14:32.624 05:09:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 01:14:32.624 05:09:15 -- scripts/common.sh@394 -- # pt= 01:14:32.624 05:09:15 -- scripts/common.sh@395 -- # return 1 01:14:32.624 05:09:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 01:14:32.624 1+0 records in 01:14:32.624 1+0 records out 01:14:32.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567916 s, 185 MB/s 01:14:32.624 05:09:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:32.624 05:09:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:32.624 05:09:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 01:14:32.624 05:09:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 01:14:32.624 05:09:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 01:14:32.887 No valid GPT data, bailing 01:14:32.887 05:09:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 01:14:32.887 05:09:15 -- scripts/common.sh@394 -- # pt= 01:14:32.887 05:09:15 -- scripts/common.sh@395 -- # return 1 01:14:32.887 05:09:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 01:14:32.887 1+0 records in 01:14:32.887 1+0 records out 01:14:32.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593211 s, 177 MB/s 01:14:32.887 05:09:15 -- spdk/autotest.sh@105 -- # sync 01:14:32.887 05:09:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 01:14:32.887 05:09:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 01:14:32.887 05:09:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 01:14:36.175 05:09:18 -- spdk/autotest.sh@111 -- # uname -s 01:14:36.175 05:09:18 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 01:14:36.175 05:09:18 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 01:14:36.175 05:09:18 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:14:36.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:37.004 
Hugepages 01:14:37.004 node hugesize free / total 01:14:37.004 node0 1048576kB 0 / 0 01:14:37.004 node0 2048kB 0 / 0 01:14:37.004 01:14:37.004 Type BDF Vendor Device NUMA Driver Device Block devices 01:14:37.264 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:14:37.264 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:14:37.264 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:14:37.523 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 01:14:37.523 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 01:14:37.524 05:09:19 -- spdk/autotest.sh@117 -- # uname -s 01:14:37.524 05:09:19 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 01:14:37.524 05:09:19 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 01:14:37.524 05:09:19 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:38.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:39.167 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:14:39.167 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:39.167 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:39.167 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:14:39.167 05:09:21 -- common/autotest_common.sh@1517 -- # sleep 1 01:14:40.104 05:09:22 -- common/autotest_common.sh@1518 -- # bdfs=() 01:14:40.104 05:09:22 -- common/autotest_common.sh@1518 -- # local bdfs 01:14:40.104 05:09:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 01:14:40.104 05:09:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 01:14:40.104 05:09:22 -- common/autotest_common.sh@1498 -- # bdfs=() 01:14:40.104 05:09:22 -- common/autotest_common.sh@1498 -- # local bdfs 01:14:40.104 05:09:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:14:40.104 05:09:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:14:40.104 05:09:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:14:40.364 05:09:22 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:14:40.364 05:09:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:14:40.364 05:09:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:40.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:41.191 Waiting for block devices as requested 01:14:41.191 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:14:41.451 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:14:41.451 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:14:41.451 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:14:46.719 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:14:46.719 05:09:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:14:46.719 05:09:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:14:46.719 05:09:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:14:46.719 05:09:28 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 01:14:46.719 05:09:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:14:46.719 05:09:28 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:14:46.719 05:09:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:14:46.719 05:09:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 01:14:46.719 05:09:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 01:14:46.719 05:09:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # grep oacs 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:14:46.719 05:09:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:14:46.719 05:09:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:14:46.719 05:09:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1543 -- # continue 01:14:46.719 05:09:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:14:46.719 05:09:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:14:46.719 05:09:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 01:14:46.719 05:09:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # grep oacs 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:14:46.719 05:09:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:14:46.719 05:09:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:14:46.719 05:09:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:14:46.719 05:09:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:14:46.719 05:09:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:14:46.719 05:09:29 -- common/autotest_common.sh@1543 -- # continue 01:14:46.719 05:09:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:14:46.719 05:09:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 01:14:46.720 05:09:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 01:14:46.720 05:09:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 01:14:46.720 05:09:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 01:14:46.720 05:09:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1531 -- # grep oacs 01:14:46.720 05:09:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:14:46.720 05:09:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:14:46.720 05:09:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:14:46.720 05:09:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:14:46.720 05:09:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:14:46.720 05:09:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:14:46.720 05:09:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:14:46.720 05:09:29 -- common/autotest_common.sh@1543 -- # continue 01:14:46.720 05:09:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:14:46.720 05:09:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 01:14:46.720 05:09:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 01:14:46.720 05:09:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 01:14:46.720 05:09:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 01:14:46.720 05:09:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 01:14:46.720 05:09:29 -- common/autotest_common.sh@1531 -- # grep oacs 01:14:46.979 05:09:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:14:46.979 05:09:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:14:46.979 05:09:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:14:46.979 05:09:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:14:46.979 05:09:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 01:14:46.979 05:09:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:14:46.979 05:09:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:14:46.979 05:09:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:14:46.979 05:09:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
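Each pass through the loop above probes one controller identically: OACS comes out of nvme id-ctrl via grep/cut, bit 3 (0x8, namespace management) is masked off, and unvmcap is read so that controllers with no unallocated capacity are skipped, which is why the iterations here all end in continue. The same probe as a stand-alone sketch (device list illustrative; needs nvme-cli and root):

    for ctrl in /dev/nvme{0..3}; do
      oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
      oacs_ns_manage=$(( oacs & 0x8 ))                              # bit 3: namespace management
      (( oacs_ns_manage != 0 )) || continue                         # cannot manage namespaces
      unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && continue                                # fully allocated: nothing to revert
      echo "$ctrl: ${unvmcap} bytes unallocated, namespace revert needed"
    done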
01:14:46.979 05:09:29 -- common/autotest_common.sh@1543 -- # continue 01:14:46.979 05:09:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 01:14:46.979 05:09:29 -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:46.979 05:09:29 -- common/autotest_common.sh@10 -- # set +x 01:14:46.979 05:09:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 01:14:46.979 05:09:29 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:46.979 05:09:29 -- common/autotest_common.sh@10 -- # set +x 01:14:46.979 05:09:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:47.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:48.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:48.487 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:14:48.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:48.487 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:14:48.487 05:09:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 01:14:48.487 05:09:30 -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:48.487 05:09:30 -- common/autotest_common.sh@10 -- # set +x 01:14:48.487 05:09:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 01:14:48.487 05:09:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 01:14:48.487 05:09:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 01:14:48.487 05:09:30 -- common/autotest_common.sh@1563 -- # bdfs=() 01:14:48.487 05:09:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 01:14:48.487 05:09:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 01:14:48.487 05:09:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 01:14:48.487 05:09:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 01:14:48.487 05:09:30 -- common/autotest_common.sh@1498 -- # bdfs=() 01:14:48.487 05:09:30 -- common/autotest_common.sh@1498 -- # local bdfs 01:14:48.487 05:09:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:14:48.487 05:09:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:14:48.487 05:09:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:14:48.747 05:09:31 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:14:48.747 05:09:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:14:48.747 05:09:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # device=0x0010 01:14:48.747 05:09:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:14:48.747 05:09:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # device=0x0010 01:14:48.747 05:09:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:14:48.747 05:09:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # device=0x0010 01:14:48.747 05:09:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
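The opal_revert_cleanup scan in progress here takes the BDF list from gen_nvme.sh and compares each controller's PCI device ID from sysfs against 0x0a54; the QEMU controllers all report 0x0010, so no OPAL revert is attempted. The sysfs check as a stand-alone sketch (BDF list illustrative):

    opal_bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID; 0x0010 on these QEMU NVMe devices
      [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")     # only 0x0a54 controllers qualify
    done
    (( ${#opal_bdfs[@]} )) || echo "no matching controllers; skipping OPAL revert"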
01:14:48.747 05:09:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 01:14:48.747 05:09:31 -- common/autotest_common.sh@1566 -- # device=0x0010 01:14:48.747 05:09:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:14:48.747 05:09:31 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 01:14:48.747 05:09:31 -- common/autotest_common.sh@1572 -- # return 0 01:14:48.747 05:09:31 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 01:14:48.747 05:09:31 -- common/autotest_common.sh@1580 -- # return 0 01:14:48.747 05:09:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 01:14:48.747 05:09:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 01:14:48.747 05:09:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:14:48.747 05:09:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:14:48.747 05:09:31 -- spdk/autotest.sh@149 -- # timing_enter lib 01:14:48.747 05:09:31 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:48.747 05:09:31 -- common/autotest_common.sh@10 -- # set +x 01:14:48.747 05:09:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 01:14:48.747 05:09:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:14:48.747 05:09:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:48.747 05:09:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:48.747 05:09:31 -- common/autotest_common.sh@10 -- # set +x 01:14:48.747 ************************************ 01:14:48.747 START TEST env 01:14:48.747 ************************************ 01:14:48.747 05:09:31 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:14:48.747 * Looking for test storage... 01:14:49.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1693 -- # lcov --version 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:49.008 05:09:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:49.008 05:09:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:49.008 05:09:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:49.008 05:09:31 env -- scripts/common.sh@336 -- # IFS=.-: 01:14:49.008 05:09:31 env -- scripts/common.sh@336 -- # read -ra ver1 01:14:49.008 05:09:31 env -- scripts/common.sh@337 -- # IFS=.-: 01:14:49.008 05:09:31 env -- scripts/common.sh@337 -- # read -ra ver2 01:14:49.008 05:09:31 env -- scripts/common.sh@338 -- # local 'op=<' 01:14:49.008 05:09:31 env -- scripts/common.sh@340 -- # ver1_l=2 01:14:49.008 05:09:31 env -- scripts/common.sh@341 -- # ver2_l=1 01:14:49.008 05:09:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:49.008 05:09:31 env -- scripts/common.sh@344 -- # case "$op" in 01:14:49.008 05:09:31 env -- scripts/common.sh@345 -- # : 1 01:14:49.008 05:09:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:49.008 05:09:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:49.008 05:09:31 env -- scripts/common.sh@365 -- # decimal 1 01:14:49.008 05:09:31 env -- scripts/common.sh@353 -- # local d=1 01:14:49.008 05:09:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:49.008 05:09:31 env -- scripts/common.sh@355 -- # echo 1 01:14:49.008 05:09:31 env -- scripts/common.sh@365 -- # ver1[v]=1 01:14:49.008 05:09:31 env -- scripts/common.sh@366 -- # decimal 2 01:14:49.008 05:09:31 env -- scripts/common.sh@353 -- # local d=2 01:14:49.008 05:09:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:49.008 05:09:31 env -- scripts/common.sh@355 -- # echo 2 01:14:49.008 05:09:31 env -- scripts/common.sh@366 -- # ver2[v]=2 01:14:49.008 05:09:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:49.008 05:09:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:49.008 05:09:31 env -- scripts/common.sh@368 -- # return 0 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:49.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:49.008 --rc genhtml_branch_coverage=1 01:14:49.008 --rc genhtml_function_coverage=1 01:14:49.008 --rc genhtml_legend=1 01:14:49.008 --rc geninfo_all_blocks=1 01:14:49.008 --rc geninfo_unexecuted_blocks=1 01:14:49.008 01:14:49.008 ' 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:49.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:49.008 --rc genhtml_branch_coverage=1 01:14:49.008 --rc genhtml_function_coverage=1 01:14:49.008 --rc genhtml_legend=1 01:14:49.008 --rc geninfo_all_blocks=1 01:14:49.008 --rc geninfo_unexecuted_blocks=1 01:14:49.008 01:14:49.008 ' 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:49.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:49.008 --rc genhtml_branch_coverage=1 01:14:49.008 --rc genhtml_function_coverage=1 01:14:49.008 --rc genhtml_legend=1 01:14:49.008 --rc geninfo_all_blocks=1 01:14:49.008 --rc geninfo_unexecuted_blocks=1 01:14:49.008 01:14:49.008 ' 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:49.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:49.008 --rc genhtml_branch_coverage=1 01:14:49.008 --rc genhtml_function_coverage=1 01:14:49.008 --rc genhtml_legend=1 01:14:49.008 --rc geninfo_all_blocks=1 01:14:49.008 --rc geninfo_unexecuted_blocks=1 01:14:49.008 01:14:49.008 ' 01:14:49.008 05:09:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:49.008 05:09:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:49.008 05:09:31 env -- common/autotest_common.sh@10 -- # set +x 01:14:49.008 ************************************ 01:14:49.008 START TEST env_memory 01:14:49.008 ************************************ 01:14:49.008 05:09:31 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:14:49.008 01:14:49.008 01:14:49.008 CUnit - A unit testing framework for C - Version 2.1-3 01:14:49.008 http://cunit.sourceforge.net/ 01:14:49.008 01:14:49.008 01:14:49.008 Suite: mem_map_2mb 01:14:49.008 Test: alloc and free memory map ...[2024-12-09 05:09:31.383746] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:14:49.008 passed 01:14:49.008 Test: mem map translation ...[2024-12-09 05:09:31.432360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:14:49.008 [2024-12-09 05:09:31.432529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:14:49.008 [2024-12-09 05:09:31.432738] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:14:49.008 [2024-12-09 05:09:31.432804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:14:49.267 passed 01:14:49.267 Test: mem map registration ...[2024-12-09 05:09:31.506582] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 01:14:49.267 [2024-12-09 05:09:31.506754] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 01:14:49.267 passed 01:14:49.267 Test: mem map adjacent registrations ...passed 01:14:49.267 Suite: mem_map_4kb 01:14:49.268 Test: alloc and free memory map ...[2024-12-09 05:09:31.690248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:14:49.527 passed 01:14:49.527 Test: mem map translation ...[2024-12-09 05:09:31.741536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 01:14:49.527 [2024-12-09 05:09:31.741699] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 01:14:49.527 [2024-12-09 05:09:31.763832] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:14:49.527 [2024-12-09 05:09:31.763998] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 01:14:49.527 passed 01:14:49.527 Test: mem map registration ...[2024-12-09 05:09:31.870844] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 01:14:49.527 [2024-12-09 05:09:31.871022] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 01:14:49.527 passed 01:14:49.786 Test: mem map adjacent registrations ...passed 01:14:49.786 01:14:49.786 Run Summary: Type Total Ran Passed Failed Inactive 01:14:49.786 suites 2 2 n/a 0 0 01:14:49.786 tests 8 8 8 0 0 01:14:49.786 asserts 304 304 304 0 n/a 01:14:49.786 01:14:49.786 Elapsed time = 0.664 seconds 01:14:49.786 01:14:49.786 real 0m0.722s 01:14:49.786 user 0m0.653s 01:14:49.786 sys 0m0.056s 01:14:49.786 05:09:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:49.786 05:09:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:14:49.786 ************************************ 01:14:49.786 END TEST 
env_memory 01:14:49.786 ************************************ 01:14:49.786 05:09:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:14:49.786 05:09:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:49.786 05:09:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:49.786 05:09:32 env -- common/autotest_common.sh@10 -- # set +x 01:14:49.786 ************************************ 01:14:49.786 START TEST env_vtophys 01:14:49.786 ************************************ 01:14:49.786 05:09:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:14:49.786 EAL: lib.eal log level changed from notice to debug 01:14:49.786 EAL: Detected lcore 0 as core 0 on socket 0 01:14:49.786 EAL: Detected lcore 1 as core 0 on socket 0 01:14:49.786 EAL: Detected lcore 2 as core 0 on socket 0 01:14:49.786 EAL: Detected lcore 3 as core 0 on socket 0 01:14:49.786 EAL: Detected lcore 4 as core 0 on socket 0 01:14:49.787 EAL: Detected lcore 5 as core 0 on socket 0 01:14:49.787 EAL: Detected lcore 6 as core 0 on socket 0 01:14:49.787 EAL: Detected lcore 7 as core 0 on socket 0 01:14:49.787 EAL: Detected lcore 8 as core 0 on socket 0 01:14:49.787 EAL: Detected lcore 9 as core 0 on socket 0 01:14:49.787 EAL: Maximum logical cores by configuration: 128 01:14:49.787 EAL: Detected CPU lcores: 10 01:14:49.787 EAL: Detected NUMA nodes: 1 01:14:49.787 EAL: Checking presence of .so 'librte_eal.so.24.1' 01:14:49.787 EAL: Detected shared linkage of DPDK 01:14:49.787 EAL: No shared files mode enabled, IPC will be disabled 01:14:49.787 EAL: Selected IOVA mode 'PA' 01:14:49.787 EAL: Probing VFIO support... 01:14:49.787 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:14:49.787 EAL: VFIO modules not loaded, skipping VFIO support... 01:14:49.787 EAL: Ask a virtual area of 0x2e000 bytes 01:14:49.787 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:14:49.787 EAL: Setting up physically contiguous memory... 
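Editor's note: the memory_ut suites above (mem_map_2mb, mem_map_4kb) and the vtophys run that follows are all driven through the same run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing seen throughout this log. The *ERROR* lines inside the suites are expected: the tests deliberately pass unaligned vaddr/len values (e.g. len=1234 against a 2 MiB map) and assert that the calls are rejected, which is why every test still reports passed. A hedged reconstruction of the wrapper from the trace alone (the real helper lives in common/autotest_common.sh and also manages xtrace and return-code bookkeeping):

    run_test() {
        if [ $# -le 1 ]; then          # the "'[' 2 -le 1 ']'" probe in the trace
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                      # emits the real/user/sys triplet
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    # e.g. the invocation traced at env/env.sh@10 above:
    run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut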
01:14:49.787 EAL: Setting maximum number of open files to 524288 01:14:49.787 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:14:49.787 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:14:49.787 EAL: Ask a virtual area of 0x61000 bytes 01:14:49.787 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:14:49.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:49.787 EAL: Ask a virtual area of 0x400000000 bytes 01:14:49.787 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:14:49.787 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:14:49.787 EAL: Ask a virtual area of 0x61000 bytes 01:14:49.787 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:14:49.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:49.787 EAL: Ask a virtual area of 0x400000000 bytes 01:14:49.787 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:14:49.787 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:14:49.787 EAL: Ask a virtual area of 0x61000 bytes 01:14:49.787 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:14:49.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:49.787 EAL: Ask a virtual area of 0x400000000 bytes 01:14:49.787 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:14:49.787 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:14:49.787 EAL: Ask a virtual area of 0x61000 bytes 01:14:49.787 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:14:49.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:49.787 EAL: Ask a virtual area of 0x400000000 bytes 01:14:49.787 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:14:49.787 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:14:49.787 EAL: Hugepages will be freed exactly as allocated. 01:14:49.787 EAL: No shared files mode enabled, IPC is disabled 01:14:49.787 EAL: No shared files mode enabled, IPC is disabled 01:14:50.046 EAL: TSC frequency is ~2490000 KHz 01:14:50.046 EAL: Main lcore 0 is ready (tid=7fc4a6ec9a40;cpuset=[0]) 01:14:50.046 EAL: Trying to obtain current memory policy. 01:14:50.046 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.046 EAL: Restoring previous memory policy: 0 01:14:50.046 EAL: request: mp_malloc_sync 01:14:50.046 EAL: No shared files mode enabled, IPC is disabled 01:14:50.046 EAL: Heap on socket 0 was expanded by 2MB 01:14:50.046 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:14:50.046 EAL: No PCI address specified using 'addr=' in: bus=pci 01:14:50.046 EAL: Mem event callback 'spdk:(nil)' registered 01:14:50.046 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 01:14:50.046 01:14:50.046 01:14:50.046 CUnit - A unit testing framework for C - Version 2.1-3 01:14:50.046 http://cunit.sourceforge.net/ 01:14:50.046 01:14:50.046 01:14:50.046 Suite: components_suite 01:14:50.305 Test: vtophys_malloc_test ...passed 01:14:50.305 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
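Editor's note: the four 0x400000000-byte reservations above are not arbitrary. Each memseg list spans n_segs x hugepage_sz of virtual address space, reserved up front so hugepages can be mapped and unmapped later without the VA layout moving; the smaller 0x61000-byte areas hold each list's metadata. The "page size 0x800kB" is the same 2 MiB hugepage, 0x800 = 2048 kB. A quick check of the arithmetic:

    $ printf '0x%x\n' $((8192 * 2097152))        # n_segs * hugepage_sz
    0x400000000
    $ echo $((4 * 8192 * 2097152 / 1024**3))     # GiB of VA across all 4 lists
    64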
01:14:50.305 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.305 EAL: Restoring previous memory policy: 4 01:14:50.305 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.305 EAL: request: mp_malloc_sync 01:14:50.305 EAL: No shared files mode enabled, IPC is disabled 01:14:50.305 EAL: Heap on socket 0 was expanded by 4MB 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was shrunk by 4MB 01:14:50.564 EAL: Trying to obtain current memory policy. 01:14:50.564 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.564 EAL: Restoring previous memory policy: 4 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was expanded by 6MB 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was shrunk by 6MB 01:14:50.564 EAL: Trying to obtain current memory policy. 01:14:50.564 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.564 EAL: Restoring previous memory policy: 4 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was expanded by 10MB 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was shrunk by 10MB 01:14:50.564 EAL: Trying to obtain current memory policy. 01:14:50.564 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.564 EAL: Restoring previous memory policy: 4 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was expanded by 18MB 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was shrunk by 18MB 01:14:50.564 EAL: Trying to obtain current memory policy. 01:14:50.564 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.564 EAL: Restoring previous memory policy: 4 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was expanded by 34MB 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was shrunk by 34MB 01:14:50.564 EAL: Trying to obtain current memory policy. 
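Editor's note: each allocation round in this suite is bracketed by the same policy dance: the EAL saves the current NUMA memory policy, switches to MPOL_PREFERRED for socket 0 while it grabs hugepages, then restores the saved value. The bare numbers in "Restoring previous memory policy: N" are the kernel's mempolicy constants as listed in linux/mempolicy.h; a small decoder for reading these lines (annotation only, not part of the test):

    declare -A mpol=( [0]=MPOL_DEFAULT [1]=MPOL_PREFERRED [2]=MPOL_BIND
                      [3]=MPOL_INTERLEAVE [4]=MPOL_LOCAL )
    echo "${mpol[4]}"    # MPOL_LOCAL, the value restored on every round after the first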
01:14:50.564 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.564 EAL: Restoring previous memory policy: 4 01:14:50.564 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.564 EAL: request: mp_malloc_sync 01:14:50.564 EAL: No shared files mode enabled, IPC is disabled 01:14:50.564 EAL: Heap on socket 0 was expanded by 66MB 01:14:50.822 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.822 EAL: request: mp_malloc_sync 01:14:50.822 EAL: No shared files mode enabled, IPC is disabled 01:14:50.822 EAL: Heap on socket 0 was shrunk by 66MB 01:14:50.822 EAL: Trying to obtain current memory policy. 01:14:50.822 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:50.822 EAL: Restoring previous memory policy: 4 01:14:50.822 EAL: Calling mem event callback 'spdk:(nil)' 01:14:50.822 EAL: request: mp_malloc_sync 01:14:50.822 EAL: No shared files mode enabled, IPC is disabled 01:14:50.822 EAL: Heap on socket 0 was expanded by 130MB 01:14:51.081 EAL: Calling mem event callback 'spdk:(nil)' 01:14:51.081 EAL: request: mp_malloc_sync 01:14:51.081 EAL: No shared files mode enabled, IPC is disabled 01:14:51.081 EAL: Heap on socket 0 was shrunk by 130MB 01:14:51.340 EAL: Trying to obtain current memory policy. 01:14:51.340 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:51.340 EAL: Restoring previous memory policy: 4 01:14:51.340 EAL: Calling mem event callback 'spdk:(nil)' 01:14:51.340 EAL: request: mp_malloc_sync 01:14:51.340 EAL: No shared files mode enabled, IPC is disabled 01:14:51.340 EAL: Heap on socket 0 was expanded by 258MB 01:14:51.911 EAL: Calling mem event callback 'spdk:(nil)' 01:14:51.911 EAL: request: mp_malloc_sync 01:14:51.911 EAL: No shared files mode enabled, IPC is disabled 01:14:51.911 EAL: Heap on socket 0 was shrunk by 258MB 01:14:52.480 EAL: Trying to obtain current memory policy. 01:14:52.480 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:52.480 EAL: Restoring previous memory policy: 4 01:14:52.480 EAL: Calling mem event callback 'spdk:(nil)' 01:14:52.480 EAL: request: mp_malloc_sync 01:14:52.480 EAL: No shared files mode enabled, IPC is disabled 01:14:52.480 EAL: Heap on socket 0 was expanded by 514MB 01:14:53.415 EAL: Calling mem event callback 'spdk:(nil)' 01:14:53.415 EAL: request: mp_malloc_sync 01:14:53.415 EAL: No shared files mode enabled, IPC is disabled 01:14:53.415 EAL: Heap on socket 0 was shrunk by 514MB 01:14:54.349 EAL: Trying to obtain current memory policy. 
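Editor's note: the expand sizes walk an exact doubling curve: 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB (below), i.e. 2^k + 2 MB per round, and every expansion is paired with an equal shrink once the buffer is freed, consistent with the earlier "Hugepages will be freed exactly as allocated" notice. The sequence is easy to reproduce:

    $ for ((mb = 4; mb <= 1026; mb = 2 * mb - 2)); do printf '%d ' "$mb"; done; echo
    4 6 10 18 34 66 130 258 514 1026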
01:14:54.349 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:54.349 EAL: Restoring previous memory policy: 4 01:14:54.349 EAL: Calling mem event callback 'spdk:(nil)' 01:14:54.349 EAL: request: mp_malloc_sync 01:14:54.349 EAL: No shared files mode enabled, IPC is disabled 01:14:54.349 EAL: Heap on socket 0 was expanded by 1026MB 01:14:56.253 EAL: Calling mem event callback 'spdk:(nil)' 01:14:56.253 EAL: request: mp_malloc_sync 01:14:56.253 EAL: No shared files mode enabled, IPC is disabled 01:14:56.253 EAL: Heap on socket 0 was shrunk by 1026MB 01:14:58.158 passed 01:14:58.158 01:14:58.158 Run Summary: Type Total Ran Passed Failed Inactive 01:14:58.158 suites 1 1 n/a 0 0 01:14:58.158 tests 2 2 2 0 0 01:14:58.158 asserts 5768 5768 5768 0 n/a 01:14:58.158 01:14:58.158 Elapsed time = 7.986 seconds 01:14:58.158 EAL: Calling mem event callback 'spdk:(nil)' 01:14:58.158 EAL: request: mp_malloc_sync 01:14:58.158 EAL: No shared files mode enabled, IPC is disabled 01:14:58.158 EAL: Heap on socket 0 was shrunk by 2MB 01:14:58.158 EAL: No shared files mode enabled, IPC is disabled 01:14:58.158 EAL: No shared files mode enabled, IPC is disabled 01:14:58.158 EAL: No shared files mode enabled, IPC is disabled 01:14:58.158 01:14:58.158 real 0m8.320s 01:14:58.158 user 0m7.368s 01:14:58.158 sys 0m0.793s 01:14:58.158 05:09:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:58.158 ************************************ 01:14:58.158 END TEST env_vtophys 01:14:58.158 ************************************ 01:14:58.158 05:09:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 01:14:58.158 05:09:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:14:58.158 05:09:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:58.158 05:09:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:58.158 05:09:40 env -- common/autotest_common.sh@10 -- # set +x 01:14:58.158 ************************************ 01:14:58.158 START TEST env_pci 01:14:58.158 ************************************ 01:14:58.158 05:09:40 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:14:58.158 01:14:58.158 01:14:58.158 CUnit - A unit testing framework for C - Version 2.1-3 01:14:58.158 http://cunit.sourceforge.net/ 01:14:58.158 01:14:58.158 01:14:58.158 Suite: pci 01:14:58.158 Test: pci_hook ...[2024-12-09 05:09:40.540815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57602 has claimed it 01:14:58.158 passed 01:14:58.158 01:14:58.158 Run Summary: Type Total Ran Passed Failed Inactive 01:14:58.158 suites 1 1 n/a 0 0 01:14:58.158 tests 1 1 1 0 0 01:14:58.158 asserts 25 25 25 0 n/a 01:14:58.158 01:14:58.158 Elapsed time = 0.007 seconds 01:14:58.158 EAL: Cannot find device (10000:00:01.0) 01:14:58.158 EAL: Failed to attach device on primary process 01:14:58.158 01:14:58.158 real 0m0.109s 01:14:58.158 user 0m0.034s 01:14:58.158 sys 0m0.074s 01:14:58.158 05:09:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:58.158 ************************************ 01:14:58.158 END TEST env_pci 01:14:58.158 ************************************ 01:14:58.158 05:09:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 01:14:58.418 05:09:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 01:14:58.418 05:09:40 env -- env/env.sh@15 -- # uname 01:14:58.418 05:09:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 01:14:58.418 05:09:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 01:14:58.418 05:09:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:14:58.418 05:09:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:14:58.418 05:09:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:58.418 05:09:40 env -- common/autotest_common.sh@10 -- # set +x 01:14:58.418 ************************************ 01:14:58.418 START TEST env_dpdk_post_init 01:14:58.418 ************************************ 01:14:58.418 05:09:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:14:58.418 EAL: Detected CPU lcores: 10 01:14:58.418 EAL: Detected NUMA nodes: 1 01:14:58.418 EAL: Detected shared linkage of DPDK 01:14:58.418 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:14:58.418 EAL: Selected IOVA mode 'PA' 01:14:58.677 TELEMETRY: No legacy callbacks, legacy socket not created 01:14:58.677 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:14:58.678 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:14:58.678 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 01:14:58.678 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 01:14:58.678 Starting DPDK initialization... 01:14:58.678 Starting SPDK post initialization... 01:14:58.678 SPDK NVMe probe 01:14:58.678 Attaching to 0000:00:10.0 01:14:58.678 Attaching to 0000:00:11.0 01:14:58.678 Attaching to 0000:00:12.0 01:14:58.678 Attaching to 0000:00:13.0 01:14:58.678 Attached to 0000:00:10.0 01:14:58.678 Attached to 0000:00:11.0 01:14:58.678 Attached to 0000:00:13.0 01:14:58.678 Attached to 0000:00:12.0 01:14:58.678 Cleaning up... 
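Editor's note: the argv built at env.sh@14-22 in the trace above is worth spelling out: the post-init test is pinned to core 0 (-c 0x1) and, on Linux only, given a fixed --base-virtaddr so DPDK maps its memory at a predictable address. A hedged sketch of that logic (illustrative; the real env.sh may differ in detail):

    argv='-c 0x1 '                                # single-core mask; note the trailing space
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000      # fixed VA base for DPDK mappings
    fi
    # $argv is left unquoted on purpose so it word-splits into the two flags:
    run_test env_dpdk_post_init \
        /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv

Note also that the "Attached to" lines arrive in a different order (00:13.0 before 00:12.0) than the "Attaching to" lines: controller initialization completes asynchronously per device.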
01:14:58.678 01:14:58.678 real 0m0.311s 01:14:58.678 user 0m0.104s 01:14:58.678 sys 0m0.110s 01:14:58.678 05:09:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:58.678 ************************************ 01:14:58.678 END TEST env_dpdk_post_init 01:14:58.678 ************************************ 01:14:58.678 05:09:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:14:58.678 05:09:41 env -- env/env.sh@26 -- # uname 01:14:58.678 05:09:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:14:58.678 05:09:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:14:58.678 05:09:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:58.678 05:09:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:58.678 05:09:41 env -- common/autotest_common.sh@10 -- # set +x 01:14:58.678 ************************************ 01:14:58.678 START TEST env_mem_callbacks 01:14:58.678 ************************************ 01:14:58.678 05:09:41 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:14:58.678 EAL: Detected CPU lcores: 10 01:14:58.678 EAL: Detected NUMA nodes: 1 01:14:58.678 EAL: Detected shared linkage of DPDK 01:14:58.936 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:14:58.936 EAL: Selected IOVA mode 'PA' 01:14:58.936 01:14:58.936 01:14:58.936 CUnit - A unit testing framework for C - Version 2.1-3 01:14:58.936 http://cunit.sourceforge.net/ 01:14:58.936 01:14:58.936 01:14:58.936 Suite: memory 01:14:58.936 Test: test ... 01:14:58.936 register 0x200000200000 2097152 01:14:58.936 malloc 3145728 01:14:58.936 TELEMETRY: No legacy callbacks, legacy socket not created 01:14:58.936 register 0x200000400000 4194304 01:14:58.936 buf 0x2000004fffc0 len 3145728 PASSED 01:14:58.936 malloc 64 01:14:58.936 buf 0x2000004ffec0 len 64 PASSED 01:14:58.936 malloc 4194304 01:14:58.936 register 0x200000800000 6291456 01:14:58.936 buf 0x2000009fffc0 len 4194304 PASSED 01:14:58.936 free 0x2000004fffc0 3145728 01:14:58.936 free 0x2000004ffec0 64 01:14:58.936 unregister 0x200000400000 4194304 PASSED 01:14:58.936 free 0x2000009fffc0 4194304 01:14:58.936 unregister 0x200000800000 6291456 PASSED 01:14:58.936 malloc 8388608 01:14:58.936 register 0x200000400000 10485760 01:14:58.936 buf 0x2000005fffc0 len 8388608 PASSED 01:14:58.936 free 0x2000005fffc0 8388608 01:14:58.936 unregister 0x200000400000 10485760 PASSED 01:14:58.936 passed 01:14:58.936 01:14:58.936 Run Summary: Type Total Ran Passed Failed Inactive 01:14:58.936 suites 1 1 n/a 0 0 01:14:58.936 tests 1 1 1 0 0 01:14:58.936 asserts 15 15 15 0 n/a 01:14:58.936 01:14:58.936 Elapsed time = 0.081 seconds 01:14:58.936 01:14:58.936 real 0m0.292s 01:14:58.936 user 0m0.107s 01:14:58.936 sys 0m0.082s 01:14:58.936 05:09:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:58.936 ************************************ 01:14:58.936 END TEST env_mem_callbacks 01:14:58.936 ************************************ 01:14:58.936 05:09:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 01:14:59.194 ************************************ 01:14:59.194 01:14:59.194 real 0m10.343s 01:14:59.194 user 0m8.493s 01:14:59.194 sys 0m1.486s 01:14:59.194 05:09:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:59.194 05:09:41 env -- common/autotest_common.sh@10 -- # set +x 01:14:59.194 END TEST env 01:14:59.194 
************************************ 01:14:59.194 05:09:41 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:14:59.194 05:09:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:59.194 05:09:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:59.194 05:09:41 -- common/autotest_common.sh@10 -- # set +x 01:14:59.194 ************************************ 01:14:59.194 START TEST rpc 01:14:59.194 ************************************ 01:14:59.194 05:09:41 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:14:59.194 * Looking for test storage... 01:14:59.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:14:59.194 05:09:41 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:59.194 05:09:41 rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:14:59.194 05:09:41 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:59.453 05:09:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:59.453 05:09:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 01:14:59.453 05:09:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 01:14:59.453 05:09:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 01:14:59.453 05:09:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:59.453 05:09:41 rpc -- scripts/common.sh@344 -- # case "$op" in 01:14:59.453 05:09:41 rpc -- scripts/common.sh@345 -- # : 1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:59.453 05:09:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:59.453 05:09:41 rpc -- scripts/common.sh@365 -- # decimal 1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@353 -- # local d=1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:59.453 05:09:41 rpc -- scripts/common.sh@355 -- # echo 1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:14:59.453 05:09:41 rpc -- scripts/common.sh@366 -- # decimal 2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@353 -- # local d=2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:59.453 05:09:41 rpc -- scripts/common.sh@355 -- # echo 2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:14:59.453 05:09:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:59.453 05:09:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:59.453 05:09:41 rpc -- scripts/common.sh@368 -- # return 0 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.453 --rc genhtml_branch_coverage=1 01:14:59.453 --rc genhtml_function_coverage=1 01:14:59.453 --rc genhtml_legend=1 01:14:59.453 --rc geninfo_all_blocks=1 01:14:59.453 --rc geninfo_unexecuted_blocks=1 01:14:59.453 01:14:59.453 ' 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.453 --rc genhtml_branch_coverage=1 01:14:59.453 --rc genhtml_function_coverage=1 01:14:59.453 --rc genhtml_legend=1 01:14:59.453 --rc geninfo_all_blocks=1 01:14:59.453 --rc geninfo_unexecuted_blocks=1 01:14:59.453 01:14:59.453 ' 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.453 --rc genhtml_branch_coverage=1 01:14:59.453 --rc genhtml_function_coverage=1 01:14:59.453 --rc genhtml_legend=1 01:14:59.453 --rc geninfo_all_blocks=1 01:14:59.453 --rc geninfo_unexecuted_blocks=1 01:14:59.453 01:14:59.453 ' 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.453 --rc genhtml_branch_coverage=1 01:14:59.453 --rc genhtml_function_coverage=1 01:14:59.453 --rc genhtml_legend=1 01:14:59.453 --rc geninfo_all_blocks=1 01:14:59.453 --rc geninfo_unexecuted_blocks=1 01:14:59.453 01:14:59.453 ' 01:14:59.453 05:09:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57729 01:14:59.453 05:09:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:14:59.453 05:09:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:14:59.453 05:09:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57729 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@835 -- # '[' -z 57729 ']' 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:59.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
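Editor's note: the launch sequence traced at rpc.sh@64-67 is the standard SPDK test-app bootstrap: start spdk_tgt in the background, arm a cleanup trap, then block in waitforlisten until the RPC socket answers. A hedged sketch of the pattern (the real waitforlisten in autotest_common.sh is more thorough still), assuming the stock rpc_get_methods RPC as the liveness probe:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            if ! kill -0 "$pid" 2> /dev/null; then
                return 1    # target died before it ever listened
            fi
            # rpc_get_methods fails fast until the target has bound the socket
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }
    waitforlisten_sketch "$spdk_pid"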
01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:59.453 05:09:41 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:59.453 [2024-12-09 05:09:41.835631] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:14:59.453 [2024-12-09 05:09:41.835976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57729 ] 01:14:59.712 [2024-12-09 05:09:42.020245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:59.712 [2024-12-09 05:09:42.126866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:14:59.712 [2024-12-09 05:09:42.126920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57729' to capture a snapshot of events at runtime. 01:14:59.712 [2024-12-09 05:09:42.126934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:59.712 [2024-12-09 05:09:42.126948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:59.712 [2024-12-09 05:09:42.126958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57729 for offline analysis/debug. 01:14:59.712 [2024-12-09 05:09:42.128298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:00.678 05:09:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:00.678 05:09:42 rpc -- common/autotest_common.sh@868 -- # return 0 01:15:00.678 05:09:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:15:00.678 05:09:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:15:00.678 05:09:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:15:00.678 05:09:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:15:00.678 05:09:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:00.678 05:09:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:00.678 05:09:42 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:00.678 ************************************ 01:15:00.678 START TEST rpc_integrity 01:15:00.678 ************************************ 01:15:00.678 05:09:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:15:00.678 05:09:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:15:00.678 05:09:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.678 05:09:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.678 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:15:00.678 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.678 05:09:43 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.678 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:15:00.678 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.678 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.678 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.678 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:15:00.678 { 01:15:00.678 "name": "Malloc0", 01:15:00.678 "aliases": [ 01:15:00.678 "2bc3a841-edee-41d7-97e7-738a3773e821" 01:15:00.678 ], 01:15:00.679 "product_name": "Malloc disk", 01:15:00.679 "block_size": 512, 01:15:00.679 "num_blocks": 16384, 01:15:00.679 "uuid": "2bc3a841-edee-41d7-97e7-738a3773e821", 01:15:00.679 "assigned_rate_limits": { 01:15:00.679 "rw_ios_per_sec": 0, 01:15:00.679 "rw_mbytes_per_sec": 0, 01:15:00.679 "r_mbytes_per_sec": 0, 01:15:00.679 "w_mbytes_per_sec": 0 01:15:00.679 }, 01:15:00.679 "claimed": false, 01:15:00.679 "zoned": false, 01:15:00.679 "supported_io_types": { 01:15:00.679 "read": true, 01:15:00.679 "write": true, 01:15:00.679 "unmap": true, 01:15:00.679 "flush": true, 01:15:00.679 "reset": true, 01:15:00.679 "nvme_admin": false, 01:15:00.679 "nvme_io": false, 01:15:00.679 "nvme_io_md": false, 01:15:00.679 "write_zeroes": true, 01:15:00.679 "zcopy": true, 01:15:00.679 "get_zone_info": false, 01:15:00.679 "zone_management": false, 01:15:00.679 "zone_append": false, 01:15:00.679 "compare": false, 01:15:00.679 "compare_and_write": false, 01:15:00.679 "abort": true, 01:15:00.679 "seek_hole": false, 01:15:00.679 "seek_data": false, 01:15:00.679 "copy": true, 01:15:00.679 "nvme_iov_md": false 01:15:00.679 }, 01:15:00.679 "memory_domains": [ 01:15:00.679 { 01:15:00.679 "dma_device_id": "system", 01:15:00.679 "dma_device_type": 1 01:15:00.679 }, 01:15:00.679 { 01:15:00.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:00.679 "dma_device_type": 2 01:15:00.679 } 01:15:00.679 ], 01:15:00.679 "driver_specific": {} 01:15:00.679 } 01:15:00.679 ]' 01:15:00.679 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:15:00.679 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:15:00.679 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:15:00.679 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.679 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.679 [2024-12-09 05:09:43.123411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:15:00.679 [2024-12-09 05:09:43.123595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:15:00.679 [2024-12-09 05:09:43.123631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:15:00.679 [2024-12-09 05:09:43.123647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:15:00.679 [2024-12-09 05:09:43.126095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:15:00.679 [2024-12-09 05:09:43.126144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:15:00.679 Passthru0 01:15:00.679 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.679 
05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:15:00.679 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.679 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:15:00.937 { 01:15:00.937 "name": "Malloc0", 01:15:00.937 "aliases": [ 01:15:00.937 "2bc3a841-edee-41d7-97e7-738a3773e821" 01:15:00.937 ], 01:15:00.937 "product_name": "Malloc disk", 01:15:00.937 "block_size": 512, 01:15:00.937 "num_blocks": 16384, 01:15:00.937 "uuid": "2bc3a841-edee-41d7-97e7-738a3773e821", 01:15:00.937 "assigned_rate_limits": { 01:15:00.937 "rw_ios_per_sec": 0, 01:15:00.937 "rw_mbytes_per_sec": 0, 01:15:00.937 "r_mbytes_per_sec": 0, 01:15:00.937 "w_mbytes_per_sec": 0 01:15:00.937 }, 01:15:00.937 "claimed": true, 01:15:00.937 "claim_type": "exclusive_write", 01:15:00.937 "zoned": false, 01:15:00.937 "supported_io_types": { 01:15:00.937 "read": true, 01:15:00.937 "write": true, 01:15:00.937 "unmap": true, 01:15:00.937 "flush": true, 01:15:00.937 "reset": true, 01:15:00.937 "nvme_admin": false, 01:15:00.937 "nvme_io": false, 01:15:00.937 "nvme_io_md": false, 01:15:00.937 "write_zeroes": true, 01:15:00.937 "zcopy": true, 01:15:00.937 "get_zone_info": false, 01:15:00.937 "zone_management": false, 01:15:00.937 "zone_append": false, 01:15:00.937 "compare": false, 01:15:00.937 "compare_and_write": false, 01:15:00.937 "abort": true, 01:15:00.937 "seek_hole": false, 01:15:00.937 "seek_data": false, 01:15:00.937 "copy": true, 01:15:00.937 "nvme_iov_md": false 01:15:00.937 }, 01:15:00.937 "memory_domains": [ 01:15:00.937 { 01:15:00.937 "dma_device_id": "system", 01:15:00.937 "dma_device_type": 1 01:15:00.937 }, 01:15:00.937 { 01:15:00.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:00.937 "dma_device_type": 2 01:15:00.937 } 01:15:00.937 ], 01:15:00.937 "driver_specific": {} 01:15:00.937 }, 01:15:00.937 { 01:15:00.937 "name": "Passthru0", 01:15:00.937 "aliases": [ 01:15:00.937 "38394b53-cd8a-5c66-9682-a8aae38600b2" 01:15:00.937 ], 01:15:00.937 "product_name": "passthru", 01:15:00.937 "block_size": 512, 01:15:00.937 "num_blocks": 16384, 01:15:00.937 "uuid": "38394b53-cd8a-5c66-9682-a8aae38600b2", 01:15:00.937 "assigned_rate_limits": { 01:15:00.937 "rw_ios_per_sec": 0, 01:15:00.937 "rw_mbytes_per_sec": 0, 01:15:00.937 "r_mbytes_per_sec": 0, 01:15:00.937 "w_mbytes_per_sec": 0 01:15:00.937 }, 01:15:00.937 "claimed": false, 01:15:00.937 "zoned": false, 01:15:00.937 "supported_io_types": { 01:15:00.937 "read": true, 01:15:00.937 "write": true, 01:15:00.937 "unmap": true, 01:15:00.937 "flush": true, 01:15:00.937 "reset": true, 01:15:00.937 "nvme_admin": false, 01:15:00.937 "nvme_io": false, 01:15:00.937 "nvme_io_md": false, 01:15:00.937 "write_zeroes": true, 01:15:00.937 "zcopy": true, 01:15:00.937 "get_zone_info": false, 01:15:00.937 "zone_management": false, 01:15:00.937 "zone_append": false, 01:15:00.937 "compare": false, 01:15:00.937 "compare_and_write": false, 01:15:00.937 "abort": true, 01:15:00.937 "seek_hole": false, 01:15:00.937 "seek_data": false, 01:15:00.937 "copy": true, 01:15:00.937 "nvme_iov_md": false 01:15:00.937 }, 01:15:00.937 "memory_domains": [ 01:15:00.937 { 01:15:00.937 "dma_device_id": "system", 01:15:00.937 "dma_device_type": 1 01:15:00.937 }, 01:15:00.937 { 01:15:00.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:00.937 "dma_device_type": 2 
01:15:00.937 } 01:15:00.937 ], 01:15:00.937 "driver_specific": { 01:15:00.937 "passthru": { 01:15:00.937 "name": "Passthru0", 01:15:00.937 "base_bdev_name": "Malloc0" 01:15:00.937 } 01:15:00.937 } 01:15:00.937 } 01:15:00.937 ]' 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:15:00.937 ************************************ 01:15:00.937 END TEST rpc_integrity 01:15:00.937 ************************************ 01:15:00.937 05:09:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:15:00.937 01:15:00.937 real 0m0.329s 01:15:00.937 user 0m0.166s 01:15:00.937 sys 0m0.067s 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:00.937 05:09:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:00.937 05:09:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:15:00.937 05:09:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:00.937 05:09:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:00.937 05:09:43 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:00.937 ************************************ 01:15:00.937 START TEST rpc_plugins 01:15:00.937 ************************************ 01:15:00.937 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 01:15:01.196 { 01:15:01.196 "name": "Malloc1", 01:15:01.196 "aliases": 
[ 01:15:01.196 "3763e4b1-34c3-4bc6-99c5-a8da5a322e3f" 01:15:01.196 ], 01:15:01.196 "product_name": "Malloc disk", 01:15:01.196 "block_size": 4096, 01:15:01.196 "num_blocks": 256, 01:15:01.196 "uuid": "3763e4b1-34c3-4bc6-99c5-a8da5a322e3f", 01:15:01.196 "assigned_rate_limits": { 01:15:01.196 "rw_ios_per_sec": 0, 01:15:01.196 "rw_mbytes_per_sec": 0, 01:15:01.196 "r_mbytes_per_sec": 0, 01:15:01.196 "w_mbytes_per_sec": 0 01:15:01.196 }, 01:15:01.196 "claimed": false, 01:15:01.196 "zoned": false, 01:15:01.196 "supported_io_types": { 01:15:01.196 "read": true, 01:15:01.196 "write": true, 01:15:01.196 "unmap": true, 01:15:01.196 "flush": true, 01:15:01.196 "reset": true, 01:15:01.196 "nvme_admin": false, 01:15:01.196 "nvme_io": false, 01:15:01.196 "nvme_io_md": false, 01:15:01.196 "write_zeroes": true, 01:15:01.196 "zcopy": true, 01:15:01.196 "get_zone_info": false, 01:15:01.196 "zone_management": false, 01:15:01.196 "zone_append": false, 01:15:01.196 "compare": false, 01:15:01.196 "compare_and_write": false, 01:15:01.196 "abort": true, 01:15:01.196 "seek_hole": false, 01:15:01.196 "seek_data": false, 01:15:01.196 "copy": true, 01:15:01.196 "nvme_iov_md": false 01:15:01.196 }, 01:15:01.196 "memory_domains": [ 01:15:01.196 { 01:15:01.196 "dma_device_id": "system", 01:15:01.196 "dma_device_type": 1 01:15:01.196 }, 01:15:01.196 { 01:15:01.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:01.196 "dma_device_type": 2 01:15:01.196 } 01:15:01.196 ], 01:15:01.196 "driver_specific": {} 01:15:01.196 } 01:15:01.196 ]' 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:15:01.196 ************************************ 01:15:01.196 END TEST rpc_plugins 01:15:01.196 ************************************ 01:15:01.196 05:09:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:15:01.196 01:15:01.196 real 0m0.158s 01:15:01.196 user 0m0.088s 01:15:01.196 sys 0m0.028s 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:01.196 05:09:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:01.196 05:09:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:15:01.196 05:09:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:01.196 05:09:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:01.196 05:09:43 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:01.196 ************************************ 01:15:01.196 START TEST rpc_trace_cmd_test 01:15:01.196 ************************************ 01:15:01.196 05:09:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 01:15:01.196 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:15:01.196 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:15:01.196 05:09:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.196 05:09:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:15:01.454 05:09:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.454 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:15:01.454 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57729", 01:15:01.454 "tpoint_group_mask": "0x8", 01:15:01.454 "iscsi_conn": { 01:15:01.454 "mask": "0x2", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "scsi": { 01:15:01.454 "mask": "0x4", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "bdev": { 01:15:01.454 "mask": "0x8", 01:15:01.454 "tpoint_mask": "0xffffffffffffffff" 01:15:01.454 }, 01:15:01.454 "nvmf_rdma": { 01:15:01.454 "mask": "0x10", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "nvmf_tcp": { 01:15:01.454 "mask": "0x20", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "ftl": { 01:15:01.454 "mask": "0x40", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "blobfs": { 01:15:01.454 "mask": "0x80", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "dsa": { 01:15:01.454 "mask": "0x200", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "thread": { 01:15:01.454 "mask": "0x400", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "nvme_pcie": { 01:15:01.454 "mask": "0x800", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "iaa": { 01:15:01.454 "mask": "0x1000", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "nvme_tcp": { 01:15:01.454 "mask": "0x2000", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "bdev_nvme": { 01:15:01.454 "mask": "0x4000", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "sock": { 01:15:01.454 "mask": "0x8000", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "blob": { 01:15:01.454 "mask": "0x10000", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "bdev_raid": { 01:15:01.454 "mask": "0x20000", 01:15:01.454 "tpoint_mask": "0x0" 01:15:01.454 }, 01:15:01.454 "scheduler": { 01:15:01.455 "mask": "0x40000", 01:15:01.455 "tpoint_mask": "0x0" 01:15:01.455 } 01:15:01.455 }' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:15:01.455 ************************************ 01:15:01.455 END TEST rpc_trace_cmd_test 01:15:01.455 ************************************ 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:15:01.455 01:15:01.455 real 0m0.250s 
01:15:01.455 user 0m0.201s 01:15:01.455 sys 0m0.040s 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:01.455 05:09:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:15:01.713 05:09:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 01:15:01.713 05:09:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:15:01.713 05:09:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:15:01.713 05:09:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:01.713 05:09:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:01.713 05:09:43 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:01.713 ************************************ 01:15:01.713 START TEST rpc_daemon_integrity 01:15:01.713 ************************************ 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.713 05:09:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.713 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:15:01.713 { 01:15:01.713 "name": "Malloc2", 01:15:01.713 "aliases": [ 01:15:01.714 "423d047d-e7cd-4166-9292-02f40c46f5f7" 01:15:01.714 ], 01:15:01.714 "product_name": "Malloc disk", 01:15:01.714 "block_size": 512, 01:15:01.714 "num_blocks": 16384, 01:15:01.714 "uuid": "423d047d-e7cd-4166-9292-02f40c46f5f7", 01:15:01.714 "assigned_rate_limits": { 01:15:01.714 "rw_ios_per_sec": 0, 01:15:01.714 "rw_mbytes_per_sec": 0, 01:15:01.714 "r_mbytes_per_sec": 0, 01:15:01.714 "w_mbytes_per_sec": 0 01:15:01.714 }, 01:15:01.714 "claimed": false, 01:15:01.714 "zoned": false, 01:15:01.714 "supported_io_types": { 01:15:01.714 "read": true, 01:15:01.714 "write": true, 01:15:01.714 "unmap": true, 01:15:01.714 "flush": true, 01:15:01.714 "reset": true, 01:15:01.714 "nvme_admin": false, 01:15:01.714 "nvme_io": false, 01:15:01.714 "nvme_io_md": false, 01:15:01.714 "write_zeroes": true, 01:15:01.714 "zcopy": true, 01:15:01.714 "get_zone_info": false, 01:15:01.714 "zone_management": false, 01:15:01.714 "zone_append": false, 01:15:01.714 "compare": false, 01:15:01.714 
"compare_and_write": false, 01:15:01.714 "abort": true, 01:15:01.714 "seek_hole": false, 01:15:01.714 "seek_data": false, 01:15:01.714 "copy": true, 01:15:01.714 "nvme_iov_md": false 01:15:01.714 }, 01:15:01.714 "memory_domains": [ 01:15:01.714 { 01:15:01.714 "dma_device_id": "system", 01:15:01.714 "dma_device_type": 1 01:15:01.714 }, 01:15:01.714 { 01:15:01.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:01.714 "dma_device_type": 2 01:15:01.714 } 01:15:01.714 ], 01:15:01.714 "driver_specific": {} 01:15:01.714 } 01:15:01.714 ]' 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.714 [2024-12-09 05:09:44.088862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 01:15:01.714 [2024-12-09 05:09:44.088925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:15:01.714 [2024-12-09 05:09:44.088947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:15:01.714 [2024-12-09 05:09:44.088961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:15:01.714 [2024-12-09 05:09:44.091473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:15:01.714 [2024-12-09 05:09:44.091516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:15:01.714 Passthru0 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:15:01.714 { 01:15:01.714 "name": "Malloc2", 01:15:01.714 "aliases": [ 01:15:01.714 "423d047d-e7cd-4166-9292-02f40c46f5f7" 01:15:01.714 ], 01:15:01.714 "product_name": "Malloc disk", 01:15:01.714 "block_size": 512, 01:15:01.714 "num_blocks": 16384, 01:15:01.714 "uuid": "423d047d-e7cd-4166-9292-02f40c46f5f7", 01:15:01.714 "assigned_rate_limits": { 01:15:01.714 "rw_ios_per_sec": 0, 01:15:01.714 "rw_mbytes_per_sec": 0, 01:15:01.714 "r_mbytes_per_sec": 0, 01:15:01.714 "w_mbytes_per_sec": 0 01:15:01.714 }, 01:15:01.714 "claimed": true, 01:15:01.714 "claim_type": "exclusive_write", 01:15:01.714 "zoned": false, 01:15:01.714 "supported_io_types": { 01:15:01.714 "read": true, 01:15:01.714 "write": true, 01:15:01.714 "unmap": true, 01:15:01.714 "flush": true, 01:15:01.714 "reset": true, 01:15:01.714 "nvme_admin": false, 01:15:01.714 "nvme_io": false, 01:15:01.714 "nvme_io_md": false, 01:15:01.714 "write_zeroes": true, 01:15:01.714 "zcopy": true, 01:15:01.714 "get_zone_info": false, 01:15:01.714 "zone_management": false, 01:15:01.714 "zone_append": false, 01:15:01.714 "compare": false, 01:15:01.714 "compare_and_write": false, 01:15:01.714 "abort": true, 01:15:01.714 "seek_hole": false, 01:15:01.714 "seek_data": false, 
01:15:01.714 "copy": true, 01:15:01.714 "nvme_iov_md": false 01:15:01.714 }, 01:15:01.714 "memory_domains": [ 01:15:01.714 { 01:15:01.714 "dma_device_id": "system", 01:15:01.714 "dma_device_type": 1 01:15:01.714 }, 01:15:01.714 { 01:15:01.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:01.714 "dma_device_type": 2 01:15:01.714 } 01:15:01.714 ], 01:15:01.714 "driver_specific": {} 01:15:01.714 }, 01:15:01.714 { 01:15:01.714 "name": "Passthru0", 01:15:01.714 "aliases": [ 01:15:01.714 "4ef5effa-79d0-57d0-a83f-3cceb8e24826" 01:15:01.714 ], 01:15:01.714 "product_name": "passthru", 01:15:01.714 "block_size": 512, 01:15:01.714 "num_blocks": 16384, 01:15:01.714 "uuid": "4ef5effa-79d0-57d0-a83f-3cceb8e24826", 01:15:01.714 "assigned_rate_limits": { 01:15:01.714 "rw_ios_per_sec": 0, 01:15:01.714 "rw_mbytes_per_sec": 0, 01:15:01.714 "r_mbytes_per_sec": 0, 01:15:01.714 "w_mbytes_per_sec": 0 01:15:01.714 }, 01:15:01.714 "claimed": false, 01:15:01.714 "zoned": false, 01:15:01.714 "supported_io_types": { 01:15:01.714 "read": true, 01:15:01.714 "write": true, 01:15:01.714 "unmap": true, 01:15:01.714 "flush": true, 01:15:01.714 "reset": true, 01:15:01.714 "nvme_admin": false, 01:15:01.714 "nvme_io": false, 01:15:01.714 "nvme_io_md": false, 01:15:01.714 "write_zeroes": true, 01:15:01.714 "zcopy": true, 01:15:01.714 "get_zone_info": false, 01:15:01.714 "zone_management": false, 01:15:01.714 "zone_append": false, 01:15:01.714 "compare": false, 01:15:01.714 "compare_and_write": false, 01:15:01.714 "abort": true, 01:15:01.714 "seek_hole": false, 01:15:01.714 "seek_data": false, 01:15:01.714 "copy": true, 01:15:01.714 "nvme_iov_md": false 01:15:01.714 }, 01:15:01.714 "memory_domains": [ 01:15:01.714 { 01:15:01.714 "dma_device_id": "system", 01:15:01.714 "dma_device_type": 1 01:15:01.714 }, 01:15:01.714 { 01:15:01.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:01.714 "dma_device_type": 2 01:15:01.714 } 01:15:01.714 ], 01:15:01.714 "driver_specific": { 01:15:01.714 "passthru": { 01:15:01.714 "name": "Passthru0", 01:15:01.714 "base_bdev_name": "Malloc2" 01:15:01.714 } 01:15:01.714 } 01:15:01.714 } 01:15:01.714 ]' 01:15:01.714 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:15:01.973 ************************************ 01:15:01.973 END TEST rpc_daemon_integrity 01:15:01.973 ************************************ 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:15:01.973 01:15:01.973 real 0m0.340s 01:15:01.973 user 0m0.186s 01:15:01.973 sys 0m0.065s 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:01.973 05:09:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:01.973 05:09:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:15:01.973 05:09:44 rpc -- rpc/rpc.sh@84 -- # killprocess 57729 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@954 -- # '[' -z 57729 ']' 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@958 -- # kill -0 57729 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@959 -- # uname 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57729 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:01.973 killing process with pid 57729 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57729' 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@973 -- # kill 57729 01:15:01.973 05:09:44 rpc -- common/autotest_common.sh@978 -- # wait 57729 01:15:04.497 01:15:04.497 real 0m5.309s 01:15:04.497 user 0m5.814s 01:15:04.497 sys 0m0.984s 01:15:04.497 05:09:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:04.497 05:09:46 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:04.497 ************************************ 01:15:04.497 END TEST rpc 01:15:04.497 ************************************ 01:15:04.497 05:09:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:15:04.497 05:09:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:04.497 05:09:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:04.497 05:09:46 -- common/autotest_common.sh@10 -- # set +x 01:15:04.497 ************************************ 01:15:04.497 START TEST skip_rpc 01:15:04.497 ************************************ 01:15:04.497 05:09:46 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:15:04.755 * Looking for test storage... 
01:15:04.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:15:04.755 05:09:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:04.755 05:09:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:15:04.755 05:09:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:04.755 05:09:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:15:04.755 05:09:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@345 -- # : 1 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:04.756 05:09:47 skip_rpc -- scripts/common.sh@368 -- # return 0 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:04.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:04.756 --rc genhtml_branch_coverage=1 01:15:04.756 --rc genhtml_function_coverage=1 01:15:04.756 --rc genhtml_legend=1 01:15:04.756 --rc geninfo_all_blocks=1 01:15:04.756 --rc geninfo_unexecuted_blocks=1 01:15:04.756 01:15:04.756 ' 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:04.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:04.756 --rc genhtml_branch_coverage=1 01:15:04.756 --rc genhtml_function_coverage=1 01:15:04.756 --rc genhtml_legend=1 01:15:04.756 --rc geninfo_all_blocks=1 01:15:04.756 --rc geninfo_unexecuted_blocks=1 01:15:04.756 01:15:04.756 ' 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:15:04.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:04.756 --rc genhtml_branch_coverage=1 01:15:04.756 --rc genhtml_function_coverage=1 01:15:04.756 --rc genhtml_legend=1 01:15:04.756 --rc geninfo_all_blocks=1 01:15:04.756 --rc geninfo_unexecuted_blocks=1 01:15:04.756 01:15:04.756 ' 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:04.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:04.756 --rc genhtml_branch_coverage=1 01:15:04.756 --rc genhtml_function_coverage=1 01:15:04.756 --rc genhtml_legend=1 01:15:04.756 --rc geninfo_all_blocks=1 01:15:04.756 --rc geninfo_unexecuted_blocks=1 01:15:04.756 01:15:04.756 ' 01:15:04.756 05:09:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:04.756 05:09:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:15:04.756 05:09:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:04.756 05:09:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:04.756 ************************************ 01:15:04.756 START TEST skip_rpc 01:15:04.756 ************************************ 01:15:04.756 05:09:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 01:15:04.756 05:09:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57958 01:15:04.756 05:09:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:15:04.756 05:09:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:15:04.756 05:09:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:15:05.013 [2024-12-09 05:09:47.225875] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
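The skip_rpc case starting here launches the target with --no-rpc-server on purpose: with no RPC listener, any rpc_cmd call must fail, which is exactly what the NOT wrapper below asserts. A sketch of that check with the test harness stripped away (rpc_cmd is the suite's wrapper around scripts/rpc.py; the default socket path is assumed):

    # With 'spdk_tgt --no-rpc-server -m 0x1 &' running, this must exit non-zero:
    if scripts/rpc.py spdk_get_version; then
        echo "FAIL: got an RPC reply although --no-rpc-server was given" >&2
        exit 1
    fi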
01:15:05.013 [2024-12-09 05:09:47.226178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57958 ] 01:15:05.013 [2024-12-09 05:09:47.410711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:05.270 [2024-12-09 05:09:47.525249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57958 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57958 ']' 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57958 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57958 01:15:10.542 killing process with pid 57958 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57958' 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57958 01:15:10.542 05:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57958 01:15:12.456 01:15:12.456 real 0m7.511s 01:15:12.456 user 0m7.024s 01:15:12.456 sys 0m0.406s 01:15:12.456 ************************************ 01:15:12.456 END TEST skip_rpc 01:15:12.456 ************************************ 01:15:12.456 05:09:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:12.456 05:09:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 01:15:12.456 05:09:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:15:12.456 05:09:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:12.456 05:09:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:12.456 05:09:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:12.456 ************************************ 01:15:12.456 START TEST skip_rpc_with_json 01:15:12.456 ************************************ 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58073 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58073 01:15:12.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58073 ']' 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:12.456 05:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:12.456 [2024-12-09 05:09:54.815816] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:12.456 [2024-12-09 05:09:54.816137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58073 ] 01:15:12.714 [2024-12-09 05:09:54.999264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:12.714 [2024-12-09 05:09:55.107774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:13.650 [2024-12-09 05:09:55.979997] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:15:13.650 request: 01:15:13.650 { 01:15:13.650 "trtype": "tcp", 01:15:13.650 "method": "nvmf_get_transports", 01:15:13.650 "req_id": 1 01:15:13.650 } 01:15:13.650 Got JSON-RPC error response 01:15:13.650 response: 01:15:13.650 { 01:15:13.650 "code": -19, 01:15:13.650 "message": "No such device" 01:15:13.650 } 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:13.650 [2024-12-09 05:09:55.992109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:13.650 05:09:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:13.909 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:13.909 05:09:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:13.909 { 01:15:13.909 "subsystems": [ 01:15:13.909 { 01:15:13.909 "subsystem": "fsdev", 01:15:13.909 "config": [ 01:15:13.909 { 01:15:13.909 "method": "fsdev_set_opts", 01:15:13.909 "params": { 01:15:13.909 "fsdev_io_pool_size": 65535, 01:15:13.909 "fsdev_io_cache_size": 256 01:15:13.909 } 01:15:13.909 } 01:15:13.909 ] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "keyring", 01:15:13.909 "config": [] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "iobuf", 01:15:13.909 "config": [ 01:15:13.909 { 01:15:13.909 "method": "iobuf_set_options", 01:15:13.909 "params": { 01:15:13.909 "small_pool_count": 8192, 01:15:13.909 "large_pool_count": 1024, 01:15:13.909 "small_bufsize": 8192, 01:15:13.909 "large_bufsize": 135168, 01:15:13.909 "enable_numa": false 01:15:13.909 } 01:15:13.909 } 01:15:13.909 ] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "sock", 01:15:13.909 "config": [ 01:15:13.909 { 
01:15:13.909 "method": "sock_set_default_impl", 01:15:13.909 "params": { 01:15:13.909 "impl_name": "posix" 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "sock_impl_set_options", 01:15:13.909 "params": { 01:15:13.909 "impl_name": "ssl", 01:15:13.909 "recv_buf_size": 4096, 01:15:13.909 "send_buf_size": 4096, 01:15:13.909 "enable_recv_pipe": true, 01:15:13.909 "enable_quickack": false, 01:15:13.909 "enable_placement_id": 0, 01:15:13.909 "enable_zerocopy_send_server": true, 01:15:13.909 "enable_zerocopy_send_client": false, 01:15:13.909 "zerocopy_threshold": 0, 01:15:13.909 "tls_version": 0, 01:15:13.909 "enable_ktls": false 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "sock_impl_set_options", 01:15:13.909 "params": { 01:15:13.909 "impl_name": "posix", 01:15:13.909 "recv_buf_size": 2097152, 01:15:13.909 "send_buf_size": 2097152, 01:15:13.909 "enable_recv_pipe": true, 01:15:13.909 "enable_quickack": false, 01:15:13.909 "enable_placement_id": 0, 01:15:13.909 "enable_zerocopy_send_server": true, 01:15:13.909 "enable_zerocopy_send_client": false, 01:15:13.909 "zerocopy_threshold": 0, 01:15:13.909 "tls_version": 0, 01:15:13.909 "enable_ktls": false 01:15:13.909 } 01:15:13.909 } 01:15:13.909 ] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "vmd", 01:15:13.909 "config": [] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "accel", 01:15:13.909 "config": [ 01:15:13.909 { 01:15:13.909 "method": "accel_set_options", 01:15:13.909 "params": { 01:15:13.909 "small_cache_size": 128, 01:15:13.909 "large_cache_size": 16, 01:15:13.909 "task_count": 2048, 01:15:13.909 "sequence_count": 2048, 01:15:13.909 "buf_count": 2048 01:15:13.909 } 01:15:13.909 } 01:15:13.909 ] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "bdev", 01:15:13.909 "config": [ 01:15:13.909 { 01:15:13.909 "method": "bdev_set_options", 01:15:13.909 "params": { 01:15:13.909 "bdev_io_pool_size": 65535, 01:15:13.909 "bdev_io_cache_size": 256, 01:15:13.909 "bdev_auto_examine": true, 01:15:13.909 "iobuf_small_cache_size": 128, 01:15:13.909 "iobuf_large_cache_size": 16 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "bdev_raid_set_options", 01:15:13.909 "params": { 01:15:13.909 "process_window_size_kb": 1024, 01:15:13.909 "process_max_bandwidth_mb_sec": 0 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "bdev_iscsi_set_options", 01:15:13.909 "params": { 01:15:13.909 "timeout_sec": 30 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "bdev_nvme_set_options", 01:15:13.909 "params": { 01:15:13.909 "action_on_timeout": "none", 01:15:13.909 "timeout_us": 0, 01:15:13.909 "timeout_admin_us": 0, 01:15:13.909 "keep_alive_timeout_ms": 10000, 01:15:13.909 "arbitration_burst": 0, 01:15:13.909 "low_priority_weight": 0, 01:15:13.909 "medium_priority_weight": 0, 01:15:13.909 "high_priority_weight": 0, 01:15:13.909 "nvme_adminq_poll_period_us": 10000, 01:15:13.909 "nvme_ioq_poll_period_us": 0, 01:15:13.909 "io_queue_requests": 0, 01:15:13.909 "delay_cmd_submit": true, 01:15:13.909 "transport_retry_count": 4, 01:15:13.909 "bdev_retry_count": 3, 01:15:13.909 "transport_ack_timeout": 0, 01:15:13.909 "ctrlr_loss_timeout_sec": 0, 01:15:13.909 "reconnect_delay_sec": 0, 01:15:13.909 "fast_io_fail_timeout_sec": 0, 01:15:13.909 "disable_auto_failback": false, 01:15:13.909 "generate_uuids": false, 01:15:13.909 "transport_tos": 0, 01:15:13.909 "nvme_error_stat": false, 01:15:13.909 "rdma_srq_size": 0, 01:15:13.909 "io_path_stat": false, 
01:15:13.909 "allow_accel_sequence": false, 01:15:13.909 "rdma_max_cq_size": 0, 01:15:13.909 "rdma_cm_event_timeout_ms": 0, 01:15:13.909 "dhchap_digests": [ 01:15:13.909 "sha256", 01:15:13.909 "sha384", 01:15:13.909 "sha512" 01:15:13.909 ], 01:15:13.909 "dhchap_dhgroups": [ 01:15:13.909 "null", 01:15:13.909 "ffdhe2048", 01:15:13.909 "ffdhe3072", 01:15:13.909 "ffdhe4096", 01:15:13.909 "ffdhe6144", 01:15:13.909 "ffdhe8192" 01:15:13.909 ] 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "bdev_nvme_set_hotplug", 01:15:13.909 "params": { 01:15:13.909 "period_us": 100000, 01:15:13.909 "enable": false 01:15:13.909 } 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "method": "bdev_wait_for_examine" 01:15:13.909 } 01:15:13.909 ] 01:15:13.909 }, 01:15:13.909 { 01:15:13.909 "subsystem": "scsi", 01:15:13.909 "config": null 01:15:13.909 }, 01:15:13.909 { 01:15:13.910 "subsystem": "scheduler", 01:15:13.910 "config": [ 01:15:13.910 { 01:15:13.910 "method": "framework_set_scheduler", 01:15:13.910 "params": { 01:15:13.910 "name": "static" 01:15:13.910 } 01:15:13.910 } 01:15:13.910 ] 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "subsystem": "vhost_scsi", 01:15:13.910 "config": [] 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "subsystem": "vhost_blk", 01:15:13.910 "config": [] 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "subsystem": "ublk", 01:15:13.910 "config": [] 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "subsystem": "nbd", 01:15:13.910 "config": [] 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "subsystem": "nvmf", 01:15:13.910 "config": [ 01:15:13.910 { 01:15:13.910 "method": "nvmf_set_config", 01:15:13.910 "params": { 01:15:13.910 "discovery_filter": "match_any", 01:15:13.910 "admin_cmd_passthru": { 01:15:13.910 "identify_ctrlr": false 01:15:13.910 }, 01:15:13.910 "dhchap_digests": [ 01:15:13.910 "sha256", 01:15:13.910 "sha384", 01:15:13.910 "sha512" 01:15:13.910 ], 01:15:13.910 "dhchap_dhgroups": [ 01:15:13.910 "null", 01:15:13.910 "ffdhe2048", 01:15:13.910 "ffdhe3072", 01:15:13.910 "ffdhe4096", 01:15:13.910 "ffdhe6144", 01:15:13.910 "ffdhe8192" 01:15:13.910 ] 01:15:13.910 } 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "method": "nvmf_set_max_subsystems", 01:15:13.910 "params": { 01:15:13.910 "max_subsystems": 1024 01:15:13.910 } 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "method": "nvmf_set_crdt", 01:15:13.910 "params": { 01:15:13.910 "crdt1": 0, 01:15:13.910 "crdt2": 0, 01:15:13.910 "crdt3": 0 01:15:13.910 } 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "method": "nvmf_create_transport", 01:15:13.910 "params": { 01:15:13.910 "trtype": "TCP", 01:15:13.910 "max_queue_depth": 128, 01:15:13.910 "max_io_qpairs_per_ctrlr": 127, 01:15:13.910 "in_capsule_data_size": 4096, 01:15:13.910 "max_io_size": 131072, 01:15:13.910 "io_unit_size": 131072, 01:15:13.910 "max_aq_depth": 128, 01:15:13.910 "num_shared_buffers": 511, 01:15:13.910 "buf_cache_size": 4294967295, 01:15:13.910 "dif_insert_or_strip": false, 01:15:13.910 "zcopy": false, 01:15:13.910 "c2h_success": true, 01:15:13.910 "sock_priority": 0, 01:15:13.910 "abort_timeout_sec": 1, 01:15:13.910 "ack_timeout": 0, 01:15:13.910 "data_wr_pool_size": 0 01:15:13.910 } 01:15:13.910 } 01:15:13.910 ] 01:15:13.910 }, 01:15:13.910 { 01:15:13.910 "subsystem": "iscsi", 01:15:13.910 "config": [ 01:15:13.910 { 01:15:13.910 "method": "iscsi_set_options", 01:15:13.910 "params": { 01:15:13.910 "node_base": "iqn.2016-06.io.spdk", 01:15:13.910 "max_sessions": 128, 01:15:13.910 "max_connections_per_session": 2, 01:15:13.910 "max_queue_depth": 64, 01:15:13.910 
"default_time2wait": 2, 01:15:13.910 "default_time2retain": 20, 01:15:13.910 "first_burst_length": 8192, 01:15:13.910 "immediate_data": true, 01:15:13.910 "allow_duplicated_isid": false, 01:15:13.910 "error_recovery_level": 0, 01:15:13.910 "nop_timeout": 60, 01:15:13.910 "nop_in_interval": 30, 01:15:13.910 "disable_chap": false, 01:15:13.910 "require_chap": false, 01:15:13.910 "mutual_chap": false, 01:15:13.910 "chap_group": 0, 01:15:13.910 "max_large_datain_per_connection": 64, 01:15:13.910 "max_r2t_per_connection": 4, 01:15:13.910 "pdu_pool_size": 36864, 01:15:13.910 "immediate_data_pool_size": 16384, 01:15:13.910 "data_out_pool_size": 2048 01:15:13.910 } 01:15:13.910 } 01:15:13.910 ] 01:15:13.910 } 01:15:13.910 ] 01:15:13.910 } 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58073 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58073 ']' 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58073 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58073 01:15:13.910 killing process with pid 58073 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58073' 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58073 01:15:13.910 05:09:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58073 01:15:16.470 05:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58118 01:15:16.470 05:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:16.470 05:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58118 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58118 ']' 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58118 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58118 01:15:21.745 killing process with pid 58118 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58118' 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58118 01:15:21.745 05:10:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58118 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:15:24.337 01:15:24.337 real 0m11.486s 01:15:24.337 user 0m10.897s 01:15:24.337 sys 0m0.896s 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:24.337 ************************************ 01:15:24.337 END TEST skip_rpc_with_json 01:15:24.337 ************************************ 01:15:24.337 05:10:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:15:24.337 05:10:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:24.337 05:10:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:24.337 05:10:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:24.337 ************************************ 01:15:24.337 START TEST skip_rpc_with_delay 01:15:24.337 ************************************ 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:15:24.337 [2024-12-09 05:10:06.380587] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
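skip_rpc_with_delay pairs --no-rpc-server with --wait-for-rpc, and the ERROR above is the expected outcome: --wait-for-rpc holds subsystem initialization until a framework_start_init RPC arrives, which can never happen on a server that was never started. For contrast, a sketch of the normal delayed-init flow (default socket assumed):

    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &   # boots, but subsystems stay uninitialized
    scripts/rpc.py spdk_get_version              # startup-time RPCs already work here
    scripts/rpc.py framework_start_init          # only now does subsystem init proceed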
01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:24.337 01:15:24.337 real 0m0.180s 01:15:24.337 user 0m0.089s 01:15:24.337 sys 0m0.090s 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:24.337 ************************************ 01:15:24.337 END TEST skip_rpc_with_delay 01:15:24.337 05:10:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:15:24.337 ************************************ 01:15:24.337 05:10:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:15:24.337 05:10:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:15:24.337 05:10:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:15:24.337 05:10:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:24.337 05:10:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:24.337 05:10:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:24.337 ************************************ 01:15:24.337 START TEST exit_on_failed_rpc_init 01:15:24.337 ************************************ 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58257 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58257 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58257 ']' 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:24.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:24.337 05:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:15:24.337 [2024-12-09 05:10:06.631171] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:24.337 [2024-12-09 05:10:06.631303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58257 ] 01:15:24.596 [2024-12-09 05:10:06.816747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:24.596 [2024-12-09 05:10:06.930917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:15:25.531 05:10:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:15:25.531 [2024-12-09 05:10:07.898102] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:25.531 [2024-12-09 05:10:07.898565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58275 ] 01:15:25.790 [2024-12-09 05:10:08.083641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:25.790 [2024-12-09 05:10:08.199319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:25.790 [2024-12-09 05:10:08.199626] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
01:15:25.790 [2024-12-09 05:10:08.199651] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:15:25.790 [2024-12-09 05:10:08.199669] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:15:26.357 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 01:15:26.357 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:26.357 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 01:15:26.357 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 01:15:26.357 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 01:15:26.357 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58257 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58257 ']' 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58257 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58257 01:15:26.358 killing process with pid 58257 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58257' 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58257 01:15:26.358 05:10:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58257 01:15:28.897 01:15:28.897 real 0m4.524s 01:15:28.897 user 0m4.879s 01:15:28.897 sys 0m0.635s 01:15:28.897 05:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:28.897 05:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:15:28.897 ************************************ 01:15:28.897 END TEST exit_on_failed_rpc_init 01:15:28.897 ************************************ 01:15:28.897 05:10:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:28.897 01:15:28.897 real 0m24.237s 01:15:28.897 user 0m23.106s 01:15:28.897 sys 0m2.344s 01:15:28.897 05:10:11 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:28.897 05:10:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:28.897 ************************************ 01:15:28.897 END TEST skip_rpc 01:15:28.897 ************************************ 01:15:28.897 05:10:11 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:15:28.897 05:10:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:28.897 05:10:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:28.897 05:10:11 -- common/autotest_common.sh@10 -- # set +x 01:15:28.897 
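exit_on_failed_rpc_init, completed above, boots one target that owns /var/tmp/spdk.sock and then expects a second instance to abort with the "socket in use" / spdk_app_stop'd sequence seen in the log. A rough by-hand reproduction of that conflict (the test proper uses waitforlisten rather than a fixed sleep):

    build/bin/spdk_tgt -m 0x1 &      # first instance claims the default RPC socket
    pid1=$!
    sleep 1                          # crude; waitforlisten polls the socket instead
    if build/bin/spdk_tgt -m 0x2; then
        echo "FAIL: second target started despite the RPC socket conflict" >&2
    fi
    kill "$pid1"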
************************************ 01:15:28.897 START TEST rpc_client 01:15:28.897 ************************************ 01:15:28.897 05:10:11 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:15:28.897 * Looking for test storage... 01:15:28.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:15:28.897 05:10:11 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:28.897 05:10:11 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 01:15:28.897 05:10:11 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@344 -- # case "$op" in 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@345 -- # : 1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@353 -- # local d=1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@355 -- # echo 1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@353 -- # local d=2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@355 -- # echo 2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:29.157 05:10:11 rpc_client -- scripts/common.sh@368 -- # return 0 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.157 --rc genhtml_branch_coverage=1 01:15:29.157 --rc genhtml_function_coverage=1 01:15:29.157 --rc genhtml_legend=1 01:15:29.157 --rc geninfo_all_blocks=1 01:15:29.157 --rc geninfo_unexecuted_blocks=1 01:15:29.157 01:15:29.157 ' 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.157 --rc genhtml_branch_coverage=1 01:15:29.157 --rc genhtml_function_coverage=1 01:15:29.157 --rc genhtml_legend=1 01:15:29.157 --rc geninfo_all_blocks=1 01:15:29.157 --rc geninfo_unexecuted_blocks=1 01:15:29.157 01:15:29.157 ' 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.157 --rc genhtml_branch_coverage=1 01:15:29.157 --rc genhtml_function_coverage=1 01:15:29.157 --rc genhtml_legend=1 01:15:29.157 --rc geninfo_all_blocks=1 01:15:29.157 --rc geninfo_unexecuted_blocks=1 01:15:29.157 01:15:29.157 ' 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.157 --rc genhtml_branch_coverage=1 01:15:29.157 --rc genhtml_function_coverage=1 01:15:29.157 --rc genhtml_legend=1 01:15:29.157 --rc geninfo_all_blocks=1 01:15:29.157 --rc geninfo_unexecuted_blocks=1 01:15:29.157 01:15:29.157 ' 01:15:29.157 05:10:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:15:29.157 OK 01:15:29.157 05:10:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:15:29.157 01:15:29.157 real 0m0.313s 01:15:29.157 user 0m0.179s 01:15:29.157 sys 0m0.148s 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:29.157 05:10:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:15:29.157 ************************************ 01:15:29.157 END TEST rpc_client 01:15:29.157 ************************************ 01:15:29.157 05:10:11 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:15:29.157 05:10:11 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:29.157 05:10:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:29.157 05:10:11 -- common/autotest_common.sh@10 -- # set +x 01:15:29.157 ************************************ 01:15:29.157 START TEST json_config 01:15:29.157 ************************************ 01:15:29.157 05:10:11 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1693 -- # lcov --version 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:29.418 05:10:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:29.418 05:10:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 01:15:29.418 05:10:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 01:15:29.418 05:10:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 01:15:29.418 05:10:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:29.418 05:10:11 json_config -- scripts/common.sh@344 -- # case "$op" in 01:15:29.418 05:10:11 json_config -- scripts/common.sh@345 -- # : 1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:29.418 05:10:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:29.418 05:10:11 json_config -- scripts/common.sh@365 -- # decimal 1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@353 -- # local d=1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:29.418 05:10:11 json_config -- scripts/common.sh@355 -- # echo 1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 01:15:29.418 05:10:11 json_config -- scripts/common.sh@366 -- # decimal 2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@353 -- # local d=2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:29.418 05:10:11 json_config -- scripts/common.sh@355 -- # echo 2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 01:15:29.418 05:10:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:29.418 05:10:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:29.418 05:10:11 json_config -- scripts/common.sh@368 -- # return 0 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:29.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.418 --rc genhtml_branch_coverage=1 01:15:29.418 --rc genhtml_function_coverage=1 01:15:29.418 --rc genhtml_legend=1 01:15:29.418 --rc geninfo_all_blocks=1 01:15:29.418 --rc geninfo_unexecuted_blocks=1 01:15:29.418 01:15:29.418 ' 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:29.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.418 --rc genhtml_branch_coverage=1 01:15:29.418 --rc genhtml_function_coverage=1 01:15:29.418 --rc genhtml_legend=1 01:15:29.418 --rc geninfo_all_blocks=1 01:15:29.418 --rc geninfo_unexecuted_blocks=1 01:15:29.418 01:15:29.418 ' 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:29.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.418 --rc genhtml_branch_coverage=1 01:15:29.418 --rc genhtml_function_coverage=1 01:15:29.418 --rc genhtml_legend=1 01:15:29.418 --rc geninfo_all_blocks=1 01:15:29.418 --rc geninfo_unexecuted_blocks=1 01:15:29.418 01:15:29.418 ' 01:15:29.418 05:10:11 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:29.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.418 --rc genhtml_branch_coverage=1 01:15:29.418 --rc genhtml_function_coverage=1 01:15:29.418 --rc genhtml_legend=1 01:15:29.419 --rc geninfo_all_blocks=1 01:15:29.419 --rc geninfo_unexecuted_blocks=1 01:15:29.419 01:15:29.419 ' 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@7 -- # uname -s 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:29.419 05:10:11 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:342cf203-c16b-474e-b3e5-4e4e1e38bf6e 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=342cf203-c16b-474e-b3e5-4e4e1e38bf6e 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:29.419 05:10:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 01:15:29.419 05:10:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:29.419 05:10:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:29.419 05:10:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:29.419 05:10:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.419 05:10:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.419 05:10:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.419 05:10:11 json_config -- paths/export.sh@5 -- # export PATH 01:15:29.419 05:10:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@51 -- # : 0 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:15:29.419 05:10:11 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:15:29.419 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:15:29.419 05:10:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 01:15:29.419 WARNING: No tests are enabled so not running JSON configuration tests 01:15:29.419 05:10:11 json_config -- json_config/json_config.sh@28 -- # exit 0 01:15:29.419 01:15:29.419 real 0m0.228s 01:15:29.419 user 0m0.140s 01:15:29.419 sys 0m0.088s 01:15:29.419 05:10:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:29.419 05:10:11 json_config -- common/autotest_common.sh@10 -- # set +x 01:15:29.419 ************************************ 01:15:29.419 END TEST json_config 01:15:29.419 ************************************ 01:15:29.419 05:10:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:15:29.419 05:10:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:29.419 05:10:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:29.419 05:10:11 -- common/autotest_common.sh@10 -- # set +x 01:15:29.419 ************************************ 01:15:29.419 START TEST json_config_extra_key 01:15:29.419 ************************************ 01:15:29.419 05:10:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:15:29.679 05:10:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:29.679 05:10:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 01:15:29.679 05:10:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:29.679 05:10:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 01:15:29.679 05:10:12 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:29.679 05:10:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 01:15:29.679 05:10:12 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:29.679 05:10:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:29.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.679 --rc genhtml_branch_coverage=1 01:15:29.679 --rc genhtml_function_coverage=1 01:15:29.679 --rc genhtml_legend=1 01:15:29.680 --rc geninfo_all_blocks=1 01:15:29.680 --rc geninfo_unexecuted_blocks=1 01:15:29.680 01:15:29.680 ' 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.680 --rc genhtml_branch_coverage=1 01:15:29.680 --rc genhtml_function_coverage=1 01:15:29.680 --rc genhtml_legend=1 01:15:29.680 --rc geninfo_all_blocks=1 01:15:29.680 --rc geninfo_unexecuted_blocks=1 01:15:29.680 01:15:29.680 ' 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.680 --rc genhtml_branch_coverage=1 01:15:29.680 --rc genhtml_function_coverage=1 01:15:29.680 --rc genhtml_legend=1 01:15:29.680 --rc geninfo_all_blocks=1 01:15:29.680 --rc geninfo_unexecuted_blocks=1 01:15:29.680 01:15:29.680 ' 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:29.680 --rc genhtml_branch_coverage=1 01:15:29.680 --rc 
genhtml_function_coverage=1 01:15:29.680 --rc genhtml_legend=1 01:15:29.680 --rc geninfo_all_blocks=1 01:15:29.680 --rc geninfo_unexecuted_blocks=1 01:15:29.680 01:15:29.680 ' 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:342cf203-c16b-474e-b3e5-4e4e1e38bf6e 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=342cf203-c16b-474e-b3e5-4e4e1e38bf6e 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:29.680 05:10:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 01:15:29.680 05:10:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:29.680 05:10:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:29.680 05:10:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:29.680 05:10:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.680 05:10:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.680 05:10:12 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.680 05:10:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:15:29.680 05:10:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:15:29.680 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:15:29.680 05:10:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 01:15:29.680 INFO: launching applications... 
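Note: the "[: : integer expression expected" message from test/nvmf/common.sh line 33, seen in both the json_config and json_config_extra_key runs above, appears harmless here (both runs continue and finish): the trace shows '[' '' -eq 1 ']', and bash's test builtin rejects an empty operand for -eq, so the check simply evaluates false. A minimal sketch of the pattern (SOME_FLAG is a hypothetical name; the actual variable is not visible in the trace):

    SOME_FLAG=""
    if [ "$SOME_FLAG" -eq 1 ]; then       # empty operand -> "[: : integer expression expected"
        echo "flag set"
    fi
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # defaulting the expansion avoids the warning
        echo "flag set"
    fi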
01:15:29.680 05:10:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:15:29.680 Waiting for target to run... 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58485 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58485 /var/tmp/spdk_tgt.sock 01:15:29.680 05:10:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58485 ']' 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:15:29.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:29.680 05:10:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:15:29.941 [2024-12-09 05:10:12.193170] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:29.941 [2024-12-09 05:10:12.193497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 01:15:30.201 [2024-12-09 05:10:12.600837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:30.461 [2024-12-09 05:10:12.702059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:31.029 05:10:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:31.030 05:10:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:15:31.030 01:15:31.030 INFO: shutting down applications... 01:15:31.030 05:10:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
01:15:31.030 05:10:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58485 ]] 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58485 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:31.030 05:10:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:15:31.598 05:10:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:15:31.598 05:10:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:31.598 05:10:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:31.598 05:10:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:15:32.167 05:10:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:15:32.167 05:10:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:32.167 05:10:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:32.167 05:10:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:15:32.739 05:10:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:15:32.739 05:10:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:32.739 05:10:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:32.739 05:10:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:15:33.306 05:10:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:15:33.306 05:10:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:33.306 05:10:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:33.306 05:10:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:15:33.565 05:10:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:15:33.565 05:10:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:33.565 05:10:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:33.565 05:10:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58485 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@43 -- # break 01:15:34.133 SPDK target shutdown done 01:15:34.133 Success 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:15:34.133 05:10:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:15:34.133 05:10:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:15:34.133 01:15:34.133 real 0m4.613s 01:15:34.133 user 0m4.120s 01:15:34.133 sys 0m0.609s 01:15:34.133 
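Note: the shutdown sequence above is a polling loop from test/json_config/common.sh: the target gets SIGINT (line 38), then the script probes it up to 30 times with kill -0 at 0.5 s intervals (lines 40-45) before clearing the pid and printing "SPDK target shutdown done". A condensed sketch of the same idiom:

    app_pid=58485                                 # pid assigned when spdk_tgt was launched
    kill -SIGINT "$app_pid"                       # request a clean shutdown
    for (( i = 0; i < 30; i++ )); do              # wait up to ~15 seconds
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 sends no signal, it only tests liveness
        sleep 0.5
    done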
05:10:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:34.133 05:10:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:15:34.133 ************************************ 01:15:34.133 END TEST json_config_extra_key 01:15:34.133 ************************************ 01:15:34.133 05:10:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:15:34.133 05:10:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:34.133 05:10:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:34.133 05:10:16 -- common/autotest_common.sh@10 -- # set +x 01:15:34.133 ************************************ 01:15:34.133 START TEST alias_rpc 01:15:34.133 ************************************ 01:15:34.133 05:10:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:15:34.392 * Looking for test storage... 01:15:34.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@345 -- # : 1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:34.392 05:10:16 alias_rpc -- scripts/common.sh@368 -- # return 0 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:34.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:34.392 --rc genhtml_branch_coverage=1 01:15:34.392 --rc genhtml_function_coverage=1 01:15:34.392 --rc genhtml_legend=1 01:15:34.392 --rc geninfo_all_blocks=1 01:15:34.392 --rc geninfo_unexecuted_blocks=1 01:15:34.392 01:15:34.392 ' 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:34.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:34.392 --rc genhtml_branch_coverage=1 01:15:34.392 --rc genhtml_function_coverage=1 01:15:34.392 --rc genhtml_legend=1 01:15:34.392 --rc geninfo_all_blocks=1 01:15:34.392 --rc geninfo_unexecuted_blocks=1 01:15:34.392 01:15:34.392 ' 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:34.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:34.392 --rc genhtml_branch_coverage=1 01:15:34.392 --rc genhtml_function_coverage=1 01:15:34.392 --rc genhtml_legend=1 01:15:34.392 --rc geninfo_all_blocks=1 01:15:34.392 --rc geninfo_unexecuted_blocks=1 01:15:34.392 01:15:34.392 ' 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:34.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:34.392 --rc genhtml_branch_coverage=1 01:15:34.392 --rc genhtml_function_coverage=1 01:15:34.392 --rc genhtml_legend=1 01:15:34.392 --rc geninfo_all_blocks=1 01:15:34.392 --rc geninfo_unexecuted_blocks=1 01:15:34.392 01:15:34.392 ' 01:15:34.392 05:10:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:15:34.392 05:10:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58602 01:15:34.392 05:10:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:34.392 05:10:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58602 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58602 ']' 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 01:15:34.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:34.392 05:10:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:34.651 [2024-12-09 05:10:16.868122] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:34.651 [2024-12-09 05:10:16.868469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58602 ] 01:15:34.651 [2024-12-09 05:10:17.053294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:34.954 [2024-12-09 05:10:17.171203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 01:15:35.910 05:10:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:15:35.910 05:10:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58602 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58602 ']' 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58602 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58602 01:15:35.910 killing process with pid 58602 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58602' 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 58602 01:15:35.910 05:10:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 58602 01:15:38.447 ************************************ 01:15:38.447 END TEST alias_rpc 01:15:38.447 ************************************ 01:15:38.447 01:15:38.447 real 0m4.287s 01:15:38.447 user 0m4.217s 01:15:38.447 sys 0m0.646s 01:15:38.447 05:10:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:38.447 05:10:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:38.447 05:10:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 01:15:38.447 05:10:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:15:38.447 05:10:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:38.447 05:10:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:38.447 05:10:20 -- common/autotest_common.sh@10 -- # set +x 01:15:38.447 ************************************ 01:15:38.447 START TEST spdkcli_tcp 01:15:38.447 ************************************ 01:15:38.447 05:10:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:15:38.716 * Looking for test storage... 
01:15:38.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:38.716 05:10:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:38.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.716 --rc genhtml_branch_coverage=1 01:15:38.716 --rc genhtml_function_coverage=1 01:15:38.716 --rc genhtml_legend=1 01:15:38.716 --rc geninfo_all_blocks=1 01:15:38.716 --rc geninfo_unexecuted_blocks=1 01:15:38.716 01:15:38.716 ' 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:38.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.716 --rc genhtml_branch_coverage=1 01:15:38.716 --rc genhtml_function_coverage=1 01:15:38.716 --rc genhtml_legend=1 01:15:38.716 --rc geninfo_all_blocks=1 01:15:38.716 --rc geninfo_unexecuted_blocks=1 01:15:38.716 
01:15:38.716 ' 01:15:38.716 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.717 --rc genhtml_branch_coverage=1 01:15:38.717 --rc genhtml_function_coverage=1 01:15:38.717 --rc genhtml_legend=1 01:15:38.717 --rc geninfo_all_blocks=1 01:15:38.717 --rc geninfo_unexecuted_blocks=1 01:15:38.717 01:15:38.717 ' 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.717 --rc genhtml_branch_coverage=1 01:15:38.717 --rc genhtml_function_coverage=1 01:15:38.717 --rc genhtml_legend=1 01:15:38.717 --rc geninfo_all_blocks=1 01:15:38.717 --rc geninfo_unexecuted_blocks=1 01:15:38.717 01:15:38.717 ' 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58709 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 01:15:38.717 05:10:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58709 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58709 ']' 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:38.717 05:10:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:38.979 [2024-12-09 05:10:21.253354] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:38.979 [2024-12-09 05:10:21.253697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 01:15:39.238 [2024-12-09 05:10:21.438362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:15:39.238 [2024-12-09 05:10:21.561925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:39.238 [2024-12-09 05:10:21.561957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:40.176 05:10:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:40.176 05:10:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 01:15:40.176 05:10:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58726 01:15:40.176 05:10:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 01:15:40.176 05:10:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 01:15:40.176 [ 01:15:40.176 "bdev_malloc_delete", 01:15:40.176 "bdev_malloc_create", 01:15:40.176 "bdev_null_resize", 01:15:40.176 "bdev_null_delete", 01:15:40.176 "bdev_null_create", 01:15:40.176 "bdev_nvme_cuse_unregister", 01:15:40.176 "bdev_nvme_cuse_register", 01:15:40.176 "bdev_opal_new_user", 01:15:40.176 "bdev_opal_set_lock_state", 01:15:40.176 "bdev_opal_delete", 01:15:40.176 "bdev_opal_get_info", 01:15:40.176 "bdev_opal_create", 01:15:40.176 "bdev_nvme_opal_revert", 01:15:40.176 "bdev_nvme_opal_init", 01:15:40.176 "bdev_nvme_send_cmd", 01:15:40.176 "bdev_nvme_set_keys", 01:15:40.176 "bdev_nvme_get_path_iostat", 01:15:40.176 "bdev_nvme_get_mdns_discovery_info", 01:15:40.176 "bdev_nvme_stop_mdns_discovery", 01:15:40.176 "bdev_nvme_start_mdns_discovery", 01:15:40.176 "bdev_nvme_set_multipath_policy", 01:15:40.176 "bdev_nvme_set_preferred_path", 01:15:40.176 "bdev_nvme_get_io_paths", 01:15:40.176 "bdev_nvme_remove_error_injection", 01:15:40.176 "bdev_nvme_add_error_injection", 01:15:40.176 "bdev_nvme_get_discovery_info", 01:15:40.176 "bdev_nvme_stop_discovery", 01:15:40.176 "bdev_nvme_start_discovery", 01:15:40.176 "bdev_nvme_get_controller_health_info", 01:15:40.176 "bdev_nvme_disable_controller", 01:15:40.176 "bdev_nvme_enable_controller", 01:15:40.176 "bdev_nvme_reset_controller", 01:15:40.176 "bdev_nvme_get_transport_statistics", 01:15:40.176 "bdev_nvme_apply_firmware", 01:15:40.176 "bdev_nvme_detach_controller", 01:15:40.176 "bdev_nvme_get_controllers", 01:15:40.176 "bdev_nvme_attach_controller", 01:15:40.176 "bdev_nvme_set_hotplug", 01:15:40.176 "bdev_nvme_set_options", 01:15:40.176 "bdev_passthru_delete", 01:15:40.176 "bdev_passthru_create", 01:15:40.176 "bdev_lvol_set_parent_bdev", 01:15:40.176 "bdev_lvol_set_parent", 01:15:40.176 "bdev_lvol_check_shallow_copy", 01:15:40.176 "bdev_lvol_start_shallow_copy", 01:15:40.176 "bdev_lvol_grow_lvstore", 01:15:40.176 "bdev_lvol_get_lvols", 01:15:40.176 "bdev_lvol_get_lvstores", 01:15:40.176 "bdev_lvol_delete", 01:15:40.176 "bdev_lvol_set_read_only", 01:15:40.176 "bdev_lvol_resize", 01:15:40.176 "bdev_lvol_decouple_parent", 01:15:40.176 "bdev_lvol_inflate", 01:15:40.176 "bdev_lvol_rename", 01:15:40.176 "bdev_lvol_clone_bdev", 01:15:40.176 "bdev_lvol_clone", 01:15:40.176 "bdev_lvol_snapshot", 01:15:40.176 "bdev_lvol_create", 01:15:40.176 "bdev_lvol_delete_lvstore", 01:15:40.176 "bdev_lvol_rename_lvstore", 01:15:40.176 
"bdev_lvol_create_lvstore", 01:15:40.176 "bdev_raid_set_options", 01:15:40.176 "bdev_raid_remove_base_bdev", 01:15:40.176 "bdev_raid_add_base_bdev", 01:15:40.176 "bdev_raid_delete", 01:15:40.176 "bdev_raid_create", 01:15:40.176 "bdev_raid_get_bdevs", 01:15:40.176 "bdev_error_inject_error", 01:15:40.176 "bdev_error_delete", 01:15:40.176 "bdev_error_create", 01:15:40.176 "bdev_split_delete", 01:15:40.176 "bdev_split_create", 01:15:40.176 "bdev_delay_delete", 01:15:40.176 "bdev_delay_create", 01:15:40.176 "bdev_delay_update_latency", 01:15:40.176 "bdev_zone_block_delete", 01:15:40.176 "bdev_zone_block_create", 01:15:40.176 "blobfs_create", 01:15:40.177 "blobfs_detect", 01:15:40.177 "blobfs_set_cache_size", 01:15:40.177 "bdev_xnvme_delete", 01:15:40.177 "bdev_xnvme_create", 01:15:40.177 "bdev_aio_delete", 01:15:40.177 "bdev_aio_rescan", 01:15:40.177 "bdev_aio_create", 01:15:40.177 "bdev_ftl_set_property", 01:15:40.177 "bdev_ftl_get_properties", 01:15:40.177 "bdev_ftl_get_stats", 01:15:40.177 "bdev_ftl_unmap", 01:15:40.177 "bdev_ftl_unload", 01:15:40.177 "bdev_ftl_delete", 01:15:40.177 "bdev_ftl_load", 01:15:40.177 "bdev_ftl_create", 01:15:40.177 "bdev_virtio_attach_controller", 01:15:40.177 "bdev_virtio_scsi_get_devices", 01:15:40.177 "bdev_virtio_detach_controller", 01:15:40.177 "bdev_virtio_blk_set_hotplug", 01:15:40.177 "bdev_iscsi_delete", 01:15:40.177 "bdev_iscsi_create", 01:15:40.177 "bdev_iscsi_set_options", 01:15:40.177 "accel_error_inject_error", 01:15:40.177 "ioat_scan_accel_module", 01:15:40.177 "dsa_scan_accel_module", 01:15:40.177 "iaa_scan_accel_module", 01:15:40.177 "keyring_file_remove_key", 01:15:40.177 "keyring_file_add_key", 01:15:40.177 "keyring_linux_set_options", 01:15:40.177 "fsdev_aio_delete", 01:15:40.177 "fsdev_aio_create", 01:15:40.177 "iscsi_get_histogram", 01:15:40.177 "iscsi_enable_histogram", 01:15:40.177 "iscsi_set_options", 01:15:40.177 "iscsi_get_auth_groups", 01:15:40.177 "iscsi_auth_group_remove_secret", 01:15:40.177 "iscsi_auth_group_add_secret", 01:15:40.177 "iscsi_delete_auth_group", 01:15:40.177 "iscsi_create_auth_group", 01:15:40.177 "iscsi_set_discovery_auth", 01:15:40.177 "iscsi_get_options", 01:15:40.177 "iscsi_target_node_request_logout", 01:15:40.177 "iscsi_target_node_set_redirect", 01:15:40.177 "iscsi_target_node_set_auth", 01:15:40.177 "iscsi_target_node_add_lun", 01:15:40.177 "iscsi_get_stats", 01:15:40.177 "iscsi_get_connections", 01:15:40.177 "iscsi_portal_group_set_auth", 01:15:40.177 "iscsi_start_portal_group", 01:15:40.177 "iscsi_delete_portal_group", 01:15:40.177 "iscsi_create_portal_group", 01:15:40.177 "iscsi_get_portal_groups", 01:15:40.177 "iscsi_delete_target_node", 01:15:40.177 "iscsi_target_node_remove_pg_ig_maps", 01:15:40.177 "iscsi_target_node_add_pg_ig_maps", 01:15:40.177 "iscsi_create_target_node", 01:15:40.177 "iscsi_get_target_nodes", 01:15:40.177 "iscsi_delete_initiator_group", 01:15:40.177 "iscsi_initiator_group_remove_initiators", 01:15:40.177 "iscsi_initiator_group_add_initiators", 01:15:40.177 "iscsi_create_initiator_group", 01:15:40.177 "iscsi_get_initiator_groups", 01:15:40.177 "nvmf_set_crdt", 01:15:40.177 "nvmf_set_config", 01:15:40.177 "nvmf_set_max_subsystems", 01:15:40.177 "nvmf_stop_mdns_prr", 01:15:40.177 "nvmf_publish_mdns_prr", 01:15:40.177 "nvmf_subsystem_get_listeners", 01:15:40.177 "nvmf_subsystem_get_qpairs", 01:15:40.177 "nvmf_subsystem_get_controllers", 01:15:40.177 "nvmf_get_stats", 01:15:40.177 "nvmf_get_transports", 01:15:40.177 "nvmf_create_transport", 01:15:40.177 "nvmf_get_targets", 01:15:40.177 
"nvmf_delete_target", 01:15:40.177 "nvmf_create_target", 01:15:40.177 "nvmf_subsystem_allow_any_host", 01:15:40.177 "nvmf_subsystem_set_keys", 01:15:40.177 "nvmf_subsystem_remove_host", 01:15:40.177 "nvmf_subsystem_add_host", 01:15:40.177 "nvmf_ns_remove_host", 01:15:40.177 "nvmf_ns_add_host", 01:15:40.177 "nvmf_subsystem_remove_ns", 01:15:40.177 "nvmf_subsystem_set_ns_ana_group", 01:15:40.177 "nvmf_subsystem_add_ns", 01:15:40.177 "nvmf_subsystem_listener_set_ana_state", 01:15:40.177 "nvmf_discovery_get_referrals", 01:15:40.177 "nvmf_discovery_remove_referral", 01:15:40.177 "nvmf_discovery_add_referral", 01:15:40.177 "nvmf_subsystem_remove_listener", 01:15:40.177 "nvmf_subsystem_add_listener", 01:15:40.177 "nvmf_delete_subsystem", 01:15:40.177 "nvmf_create_subsystem", 01:15:40.177 "nvmf_get_subsystems", 01:15:40.177 "env_dpdk_get_mem_stats", 01:15:40.177 "nbd_get_disks", 01:15:40.177 "nbd_stop_disk", 01:15:40.177 "nbd_start_disk", 01:15:40.177 "ublk_recover_disk", 01:15:40.177 "ublk_get_disks", 01:15:40.177 "ublk_stop_disk", 01:15:40.177 "ublk_start_disk", 01:15:40.177 "ublk_destroy_target", 01:15:40.177 "ublk_create_target", 01:15:40.177 "virtio_blk_create_transport", 01:15:40.177 "virtio_blk_get_transports", 01:15:40.177 "vhost_controller_set_coalescing", 01:15:40.177 "vhost_get_controllers", 01:15:40.177 "vhost_delete_controller", 01:15:40.177 "vhost_create_blk_controller", 01:15:40.177 "vhost_scsi_controller_remove_target", 01:15:40.177 "vhost_scsi_controller_add_target", 01:15:40.177 "vhost_start_scsi_controller", 01:15:40.177 "vhost_create_scsi_controller", 01:15:40.177 "thread_set_cpumask", 01:15:40.177 "scheduler_set_options", 01:15:40.177 "framework_get_governor", 01:15:40.177 "framework_get_scheduler", 01:15:40.177 "framework_set_scheduler", 01:15:40.177 "framework_get_reactors", 01:15:40.177 "thread_get_io_channels", 01:15:40.177 "thread_get_pollers", 01:15:40.177 "thread_get_stats", 01:15:40.177 "framework_monitor_context_switch", 01:15:40.177 "spdk_kill_instance", 01:15:40.177 "log_enable_timestamps", 01:15:40.177 "log_get_flags", 01:15:40.177 "log_clear_flag", 01:15:40.177 "log_set_flag", 01:15:40.177 "log_get_level", 01:15:40.177 "log_set_level", 01:15:40.177 "log_get_print_level", 01:15:40.177 "log_set_print_level", 01:15:40.177 "framework_enable_cpumask_locks", 01:15:40.177 "framework_disable_cpumask_locks", 01:15:40.177 "framework_wait_init", 01:15:40.177 "framework_start_init", 01:15:40.177 "scsi_get_devices", 01:15:40.177 "bdev_get_histogram", 01:15:40.177 "bdev_enable_histogram", 01:15:40.177 "bdev_set_qos_limit", 01:15:40.177 "bdev_set_qd_sampling_period", 01:15:40.177 "bdev_get_bdevs", 01:15:40.177 "bdev_reset_iostat", 01:15:40.177 "bdev_get_iostat", 01:15:40.177 "bdev_examine", 01:15:40.177 "bdev_wait_for_examine", 01:15:40.177 "bdev_set_options", 01:15:40.177 "accel_get_stats", 01:15:40.177 "accel_set_options", 01:15:40.177 "accel_set_driver", 01:15:40.177 "accel_crypto_key_destroy", 01:15:40.177 "accel_crypto_keys_get", 01:15:40.177 "accel_crypto_key_create", 01:15:40.177 "accel_assign_opc", 01:15:40.177 "accel_get_module_info", 01:15:40.177 "accel_get_opc_assignments", 01:15:40.177 "vmd_rescan", 01:15:40.177 "vmd_remove_device", 01:15:40.177 "vmd_enable", 01:15:40.177 "sock_get_default_impl", 01:15:40.177 "sock_set_default_impl", 01:15:40.177 "sock_impl_set_options", 01:15:40.177 "sock_impl_get_options", 01:15:40.177 "iobuf_get_stats", 01:15:40.177 "iobuf_set_options", 01:15:40.177 "keyring_get_keys", 01:15:40.177 "framework_get_pci_devices", 01:15:40.177 
"framework_get_config", 01:15:40.177 "framework_get_subsystems", 01:15:40.177 "fsdev_set_opts", 01:15:40.177 "fsdev_get_opts", 01:15:40.177 "trace_get_info", 01:15:40.177 "trace_get_tpoint_group_mask", 01:15:40.177 "trace_disable_tpoint_group", 01:15:40.177 "trace_enable_tpoint_group", 01:15:40.177 "trace_clear_tpoint_mask", 01:15:40.177 "trace_set_tpoint_mask", 01:15:40.177 "notify_get_notifications", 01:15:40.177 "notify_get_types", 01:15:40.177 "spdk_get_version", 01:15:40.177 "rpc_get_methods" 01:15:40.177 ] 01:15:40.437 05:10:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:40.437 05:10:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:15:40.437 05:10:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58709 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58709 ']' 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58709 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58709 01:15:40.437 killing process with pid 58709 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58709' 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58709 01:15:40.437 05:10:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58709 01:15:42.974 ************************************ 01:15:42.974 END TEST spdkcli_tcp 01:15:42.974 ************************************ 01:15:42.974 01:15:42.974 real 0m4.354s 01:15:42.974 user 0m7.615s 01:15:42.974 sys 0m0.673s 01:15:42.974 05:10:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:42.974 05:10:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:42.974 05:10:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:15:42.974 05:10:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:42.974 05:10:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:42.974 05:10:25 -- common/autotest_common.sh@10 -- # set +x 01:15:42.974 ************************************ 01:15:42.974 START TEST dpdk_mem_utility 01:15:42.974 ************************************ 01:15:42.974 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:15:43.232 * Looking for test storage... 
01:15:43.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:15:43.232 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:43.233 05:10:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:43.233 --rc genhtml_branch_coverage=1 01:15:43.233 --rc genhtml_function_coverage=1 01:15:43.233 --rc genhtml_legend=1 01:15:43.233 --rc geninfo_all_blocks=1 01:15:43.233 --rc geninfo_unexecuted_blocks=1 01:15:43.233 01:15:43.233 ' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:43.233 --rc 
genhtml_branch_coverage=1 01:15:43.233 --rc genhtml_function_coverage=1 01:15:43.233 --rc genhtml_legend=1 01:15:43.233 --rc geninfo_all_blocks=1 01:15:43.233 --rc geninfo_unexecuted_blocks=1 01:15:43.233 01:15:43.233 ' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:43.233 --rc genhtml_branch_coverage=1 01:15:43.233 --rc genhtml_function_coverage=1 01:15:43.233 --rc genhtml_legend=1 01:15:43.233 --rc geninfo_all_blocks=1 01:15:43.233 --rc geninfo_unexecuted_blocks=1 01:15:43.233 01:15:43.233 ' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:43.233 --rc genhtml_branch_coverage=1 01:15:43.233 --rc genhtml_function_coverage=1 01:15:43.233 --rc genhtml_legend=1 01:15:43.233 --rc geninfo_all_blocks=1 01:15:43.233 --rc geninfo_unexecuted_blocks=1 01:15:43.233 01:15:43.233 ' 01:15:43.233 05:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:15:43.233 05:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58831 01:15:43.233 05:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:43.233 05:10:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58831 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58831 ']' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:43.233 05:10:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:15:43.233 [2024-12-09 05:10:25.655898] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:43.233 [2024-12-09 05:10:25.656022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58831 ] 01:15:43.490 [2024-12-09 05:10:25.828719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:43.748 [2024-12-09 05:10:25.945912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:44.685 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:44.685 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 01:15:44.685 05:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:15:44.685 05:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:15:44.685 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:44.685 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:15:44.685 { 01:15:44.685 "filename": "/tmp/spdk_mem_dump.txt" 01:15:44.685 } 01:15:44.685 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:44.685 05:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:15:44.685 DPDK memory size 824.000000 MiB in 1 heap(s) 01:15:44.685 1 heaps totaling size 824.000000 MiB 01:15:44.685 size: 824.000000 MiB heap id: 0 01:15:44.685 end heaps---------- 01:15:44.685 9 mempools totaling size 603.782043 MiB 01:15:44.685 size: 212.674988 MiB name: PDU_immediate_data_Pool 01:15:44.685 size: 158.602051 MiB name: PDU_data_out_Pool 01:15:44.685 size: 100.555481 MiB name: bdev_io_58831 01:15:44.685 size: 50.003479 MiB name: msgpool_58831 01:15:44.685 size: 36.509338 MiB name: fsdev_io_58831 01:15:44.685 size: 21.763794 MiB name: PDU_Pool 01:15:44.685 size: 19.513306 MiB name: SCSI_TASK_Pool 01:15:44.685 size: 4.133484 MiB name: evtpool_58831 01:15:44.685 size: 0.026123 MiB name: Session_Pool 01:15:44.685 end mempools------- 01:15:44.685 6 memzones totaling size 4.142822 MiB 01:15:44.685 size: 1.000366 MiB name: RG_ring_0_58831 01:15:44.685 size: 1.000366 MiB name: RG_ring_1_58831 01:15:44.685 size: 1.000366 MiB name: RG_ring_4_58831 01:15:44.685 size: 1.000366 MiB name: RG_ring_5_58831 01:15:44.685 size: 0.125366 MiB name: RG_ring_2_58831 01:15:44.685 size: 0.015991 MiB name: RG_ring_3_58831 01:15:44.685 end memzones------- 01:15:44.685 05:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:15:44.685 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 01:15:44.685 list of free elements. 
01:15:44.685 element at address: 0x200006400000 with size: 1.995972 MiB
01:15:44.685 element at address: 0x20000a600000 with size: 1.995972 MiB
01:15:44.685 element at address: 0x200003e00000 with size: 1.991028 MiB
01:15:44.685 element at address: 0x200019500040 with size: 0.999939 MiB
01:15:44.685 element at address: 0x200019900040 with size: 0.999939 MiB
01:15:44.685 element at address: 0x200019a00000 with size: 0.999084 MiB
01:15:44.685 element at address: 0x200032600000 with size: 0.994324 MiB
01:15:44.685 element at address: 0x200000400000 with size: 0.992004 MiB
01:15:44.685 element at address: 0x200019200000 with size: 0.959656 MiB
01:15:44.685 element at address: 0x200019d00040 with size: 0.936401 MiB
01:15:44.685 element at address: 0x200000200000 with size: 0.716980 MiB
01:15:44.685 element at address: 0x20001b400000 with size: 0.560974 MiB
01:15:44.685 element at address: 0x200000c00000 with size: 0.489197 MiB
01:15:44.685 element at address: 0x200019600000 with size: 0.487976 MiB
01:15:44.685 element at address: 0x200019e00000 with size: 0.485413 MiB
01:15:44.685 element at address: 0x200012c00000 with size: 0.433228 MiB
01:15:44.685 element at address: 0x200028800000 with size: 0.390442 MiB
01:15:44.685 element at address: 0x200000800000 with size: 0.350891 MiB
01:15:44.685 list of standard malloc elements. size: 199.289673 MiB
01:15:44.685 element at address: 0x20000a7fef80 with size: 132.000183 MiB
01:15:44.685 element at address: 0x2000065fef80 with size: 64.000183 MiB
01:15:44.685 element at address: 0x2000193fff80 with size: 1.000183 MiB
01:15:44.685 element at address: 0x2000197fff80 with size: 1.000183 MiB
01:15:44.685 element at address: 0x200019bfff80 with size: 1.000183 MiB
01:15:44.685 element at address: 0x2000003d9e80 with size: 0.140808 MiB
01:15:44.685 element at address: 0x200019deff40 with size: 0.062683 MiB
01:15:44.685 element at address: 0x2000003fdf40 with size: 0.007996 MiB
01:15:44.685 element at address: 0x20000a5ff040 with size: 0.000427 MiB
01:15:44.685 element at address: 0x200019defdc0 with size: 0.000366 MiB
01:15:44.685 element at address: 0x200012bff040 with size: 0.000305 MiB
01:15:44.686 [roughly 300 further malloc elements of 0.000244 MiB each, covering the remaining 0x2000002... through 0x2000288... address ranges, elided]
01:15:44.687 list of memzone associated elements. size: 607.930908 MiB
01:15:44.687 element at address: 0x20001b4954c0 with size: 211.416809 MiB
01:15:44.687 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
01:15:44.687 element at address: 0x20002886ff80 with size: 157.562622 MiB
01:15:44.687 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
01:15:44.687 element at address: 0x200012df1e40 with size: 100.055115 MiB
01:15:44.687 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58831_0
01:15:44.687 element at address: 0x200000dff340 with size: 48.003113 MiB
01:15:44.687 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58831_0
01:15:44.687 element at address: 0x200003ffdb40 with size: 36.008972 MiB
01:15:44.687 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58831_0
01:15:44.687 element at address: 0x200019fbe900 with size: 20.255615 MiB
01:15:44.687 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
01:15:44.687 element at address: 0x2000327feb00 with size: 18.005127 MiB
01:15:44.687 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
01:15:44.687 element at address: 0x2000004ffec0 with size: 3.000305 MiB
01:15:44.687 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58831_0
01:15:44.688 [remaining element/memzone pairs of 2.000549 MiB and below elided: RG_MP_msgpool_58831, MP_evtpool_58831, MP_PDU_Pool, MP_PDU_immediate_data_Pool, MP_PDU_data_out_Pool, MP_SCSI_TASK_Pool, RG_ring_0/1/2/3/4/5_58831, RG_MP_fsdev_io_58831, RG_MP_bdev_io_58831, RG_MP_PDU_Pool, RG_MP_SCSI_TASK_Pool, RG_MP_PDU_immediate_data_Pool, RG_MP_evtpool_58831, RG_MP_PDU_data_out_Pool, MP_Session_Pool_0, RG_MP_Session_Pool, MP_msgpool_58831, MP_fsdev_io_58831, MP_bdev_io_58831, MP_Session_Pool]
01:15:44.688 05:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
01:15:44.688 05:10:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58831
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58831 ']'
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58831
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58831
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58831'
01:15:44.688 killing process with pid 58831
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58831
01:15:44.688 05:10:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58831
01:15:47.219
01:15:47.219 real 0m4.164s
01:15:47.219 user 0m4.021s
01:15:47.219 sys 0m0.596s
01:15:47.219 05:10:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
01:15:47.219 ************************************
01:15:47.219 05:10:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
01:15:47.219 END TEST dpdk_mem_utility
01:15:47.219 ************************************
01:15:47.219 05:10:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
01:15:47.219 05:10:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:15:47.219 05:10:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
01:15:47.219 05:10:29 -- common/autotest_common.sh@10 -- # set +x
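
The dump above comes from just three commands against the running target, all visible in the xtrace: the env_dpdk_get_mem_stats RPC asks the app to write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders it. A minimal sketch for reproducing it by hand, assuming an SPDK app is already up and the working directory is the repo root (paths and the dump location are taken from the trace):

  # ask the running SPDK app to dump its DPDK memory stats to /tmp/spdk_mem_dump.txt
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools, and memzones from the dump
  ./scripts/dpdk_mem_info.py
  # per-element detail for heap 0, as in the listing above
  ./scripts/dpdk_mem_info.py -m 0
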
01:15:47.219 ************************************
01:15:47.219 START TEST event
01:15:47.219 ************************************
01:15:47.219 05:10:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
01:15:47.219 * Looking for test storage...
01:15:47.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
01:15:47.479 [xtrace from the lcov version check in scripts/common.sh and the multi-line LCOV_OPTS/LCOV exports elided]
01:15:47.479 05:10:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
01:15:47.479 05:10:29 event -- bdev/nbd_common.sh@6 -- # set -e
01:15:47.479 05:10:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
01:15:47.479 05:10:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
01:15:47.479 05:10:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable
01:15:47.479 05:10:29 event -- common/autotest_common.sh@10 -- # set +x
01:15:47.479 ************************************
01:15:47.479 START TEST event_perf
01:15:47.479 ************************************
01:15:47.479 05:10:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
01:15:47.479 Running I/O for 1 seconds...[2024-12-09 05:10:29.825706] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:15:47.479 [2024-12-09 05:10:29.825815] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ]
01:15:47.738 [2024-12-09 05:10:30.009972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
01:15:47.738 [2024-12-09 05:10:30.134801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:15:47.738 Running I/O for 1 seconds...[2024-12-09 05:10:30.134919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
01:15:47.738 [2024-12-09 05:10:30.135035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:15:47.738 [2024-12-09 05:10:30.135075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
01:15:49.116
01:15:49.116 lcore 0: 199811
01:15:49.116 lcore 1: 199811
01:15:49.116 lcore 2: 199811
01:15:49.116 lcore 3: 199811
01:15:49.116 done.
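
A quick reading of the event_perf run above: -m 0xF pins one reactor to each of cores 0 through 3 and -t 1 runs the event loop for one second, yielding about 199,811 events on each lcore. The same micro-benchmark can be rerun with a different mask or duration; a hedged example using only the flags visible in the trace (the specific values here are illustrative):

  # two cores (mask 0x3) for five seconds instead of four cores for one
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5
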
01:15:49.116
01:15:49.116 real 0m1.678s
01:15:49.116 user 0m4.430s
01:15:49.116 sys 0m0.127s
01:15:49.116 05:10:31 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
01:15:49.116 ************************************
01:15:49.116 END TEST event_perf
01:15:49.116 05:10:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x
01:15:49.116 ************************************
01:15:49.117 05:10:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
01:15:49.117 05:10:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
01:15:49.117 05:10:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable
01:15:49.117 05:10:31 event -- common/autotest_common.sh@10 -- # set +x
01:15:49.117 ************************************
01:15:49.117 START TEST event_reactor
01:15:49.117 ************************************
01:15:49.117 05:10:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
01:15:49.376 [2024-12-09 05:10:31.584444] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:15:49.376 [2024-12-09 05:10:31.584576] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58984 ]
01:15:49.376 [2024-12-09 05:10:31.768037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:15:49.635 [2024-12-09 05:10:31.878153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:15:51.097 test_start
01:15:51.097 oneshot
01:15:51.097 tick 100
01:15:51.097 tick 100
01:15:51.097 tick 250
01:15:51.097 tick 100
01:15:51.097 tick 100
01:15:51.097 tick 100
01:15:51.097 tick 250
01:15:51.097 tick 500
01:15:51.097 tick 100
01:15:51.097 tick 100
01:15:51.097 tick 250
01:15:51.097 tick 100
01:15:51.097 tick 100
01:15:51.097 test_end
01:15:51.097
01:15:51.097 real 0m1.653s
01:15:51.097 user 0m1.427s
01:15:51.097 sys 0m0.117s
01:15:51.097 05:10:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
01:15:51.097 05:10:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
01:15:51.097 ************************************
01:15:51.097 END TEST event_reactor
01:15:51.097 ************************************
01:15:51.097 05:10:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
01:15:51.097 05:10:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
01:15:51.097 05:10:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable
01:15:51.097 05:10:33 event -- common/autotest_common.sh@10 -- # set +x
01:15:51.097 ************************************
01:15:51.097 START TEST event_reactor_perf
01:15:51.097 ************************************
01:15:51.097 05:10:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
01:15:51.097 [2024-12-09 05:10:33.310089] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:15:51.097 [2024-12-09 05:10:33.310238] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ]
01:15:51.356 [2024-12-09 05:10:33.494305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:15:51.356 [2024-12-09 05:10:33.610402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:15:52.733 test_start
01:15:52.733 test_end
01:15:52.733 Performance: 397215 events per second
01:15:52.733
01:15:52.733 real 0m1.656s
01:15:52.733 user 0m1.440s
01:15:52.733 sys 0m0.108s
01:15:52.733 05:10:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
01:15:52.733 05:10:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
01:15:52.733 ************************************
01:15:52.733 END TEST event_reactor_perf
01:15:52.733 ************************************
01:15:52.733 05:10:34 event -- event/event.sh@49 -- # uname -s
01:15:52.733 05:10:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
01:15:52.733 05:10:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
01:15:52.733 05:10:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:15:52.733 05:10:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable
01:15:52.733 05:10:34 event -- common/autotest_common.sh@10 -- # set +x
01:15:52.733 ************************************
01:15:52.733 START TEST event_scheduler
01:15:52.733 ************************************
01:15:52.733 05:10:35 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
01:15:52.733 * Looking for test storage...
01:15:52.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
01:15:52.991 [xtrace from the lcov version check in scripts/common.sh and the multi-line LCOV_OPTS/LCOV exports elided]
01:15:52.992 05:10:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
01:15:52.992 05:10:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59097
01:15:52.992 05:10:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
01:15:52.992 05:10:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
01:15:52.992 05:10:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59097
01:15:52.992 05:10:35 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59097 ']'
01:15:52.992 05:10:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:15:52.992 05:10:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
01:15:52.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:15:52.992 05:10:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:15:52.992 05:10:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
01:15:52.992 05:10:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
01:15:52.992 [2024-12-09 05:10:35.319209] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:15:52.992 [2024-12-09 05:10:35.319347] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ]
01:15:53.250 [2024-12-09 05:10:35.503174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
01:15:53.250 [2024-12-09 05:10:35.615346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:15:53.250 [2024-12-09 05:10:35.615534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:15:53.250 [2024-12-09 05:10:35.615656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
01:15:53.250 [2024-12-09 05:10:35.615690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
01:15:53.817 05:10:36 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:15:53.817 05:10:36 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
01:15:53.817 05:10:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
01:15:53.817 05:10:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
01:15:53.817 05:10:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
01:15:53.817 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
01:15:53.817 POWER: Cannot set governor of lcore 0 to userspace
01:15:53.818 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
01:15:53.818 POWER: Cannot set governor of lcore 0 to performance
01:15:53.818 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
01:15:53.818 POWER: Cannot set governor of lcore 0 to userspace
01:15:53.818 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
01:15:53.818 POWER: Cannot set governor of lcore 0 to userspace
01:15:53.818 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
01:15:53.818 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
01:15:53.818 POWER: Unable to set Power Management Environment for lcore 0
01:15:53.818 [2024-12-09 05:10:36.152280] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
01:15:53.818 [2024-12-09 05:10:36.152307] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
01:15:53.818 [2024-12-09 05:10:36.152320] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
01:15:53.818 [2024-12-09 05:10:36.152341] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
01:15:53.818 [2024-12-09 05:10:36.152352] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
01:15:53.818 [2024-12-09 05:10:36.152364] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
01:15:53.818 05:10:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:15:53.818 05:10:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
01:15:53.818 05:10:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
01:15:53.818 05:10:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
01:15:54.077 [2024-12-09 05:10:36.496209] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
01:15:54.077 05:10:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:15:54.077 05:10:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
01:15:54.077 05:10:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:15:54.077 05:10:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
01:15:54.077 05:10:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
01:15:54.077 ************************************
01:15:54.077 START TEST scheduler_create_thread
01:15:54.077 ************************************
01:15:54.077 05:10:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
01:15:54.077 [the repeated xtrace framing around each rpc_cmd below ([[ 0 == 0 ]] / xtrace_disable / set +x) elided; the echoed number after each create is the new thread id]
01:15:54.077 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
01:15:54.077 2
01:15:54.335 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
01:15:54.335 3
01:15:54.335 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
01:15:54.335 4
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
01:15:54.336 5
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
01:15:54.336 6
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
01:15:54.336 7
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
01:15:54.336 8
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
01:15:54.336 9
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
01:15:54.336 10
01:15:54.336 05:10:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
01:15:55.712 05:10:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
01:15:55.712 05:10:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
01:15:56.647 05:10:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
01:15:57.214 05:10:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
01:15:57.214 05:10:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
01:15:58.152
01:15:58.152 real 0m3.881s
01:15:58.152 user 0m0.028s
01:15:58.152 sys 0m0.006s
01:15:58.152 05:10:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
01:15:58.152 ************************************
01:15:58.152 END TEST scheduler_create_thread
01:15:58.152 ************************************
01:15:58.152 05:10:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
01:15:58.152 05:10:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59097
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59097 ']'
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59097
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59097
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
01:15:58.152 killing process with pid 59097
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59097'
01:15:58.152 05:10:40 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59097
common/autotest_common.sh@978 -- # wait 59097 01:15:58.412 [2024-12-09 05:10:40.770345] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 01:15:59.795 01:15:59.795 real 0m7.018s 01:15:59.795 user 0m15.074s 01:15:59.795 sys 0m0.546s 01:15:59.795 05:10:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:59.795 05:10:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:15:59.795 ************************************ 01:15:59.795 END TEST event_scheduler 01:15:59.795 ************************************ 01:15:59.795 05:10:42 event -- event/event.sh@51 -- # modprobe -n nbd 01:15:59.795 05:10:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:15:59.795 05:10:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:59.795 05:10:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:59.795 05:10:42 event -- common/autotest_common.sh@10 -- # set +x 01:15:59.795 ************************************ 01:15:59.795 START TEST app_repeat 01:15:59.795 ************************************ 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59219 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 01:15:59.795 Process app_repeat pid: 59219 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59219' 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:15:59.795 spdk_app_start Round 0 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:15:59.795 05:10:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59219 /var/tmp/spdk-nbd.sock 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59219 ']' 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:59.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:59.795 05:10:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:15:59.795 [2024-12-09 05:10:42.167531] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:59.795 [2024-12-09 05:10:42.167642] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59219 ] 01:16:00.055 [2024-12-09 05:10:42.337724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:00.055 [2024-12-09 05:10:42.453069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:00.055 [2024-12-09 05:10:42.453100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:00.624 05:10:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:00.624 05:10:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:00.625 05:10:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:00.884 Malloc0 01:16:01.143 05:10:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:01.403 Malloc1 01:16:01.403 05:10:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:01.403 05:10:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:16:01.662 /dev/nbd0 01:16:01.662 05:10:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:16:01.662 05:10:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:16:01.662 05:10:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:16:01.662 05:10:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:01.662 05:10:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:01.662 05:10:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:01.662 05:10:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:16:01.662 05:10:43 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:01.663 1+0 records in 01:16:01.663 1+0 records out 01:16:01.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236118 s, 17.3 MB/s 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:01.663 05:10:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:01.663 05:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:01.663 05:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:01.663 05:10:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:16:01.922 /dev/nbd1 01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:01.922 1+0 records in 01:16:01.922 1+0 records out 01:16:01.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373606 s, 11.0 MB/s 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:01.922 05:10:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
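
The waitfornbd steps traced above show how the harness proves each /dev/nbdX device is live before any data pass: poll /proc/partitions for the device name, then read one 4 KiB block with direct I/O and stat-check that something landed. A minimal bash reconstruction of that pattern follows; the helper name, the 20-retry bound, and the dd/stat/rm sequence come from the trace, while the scratch path /tmp/nbdtest and the sleep between retries are assumptions (the trace writes to test/event/nbdtest and succeeded on the first try):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest                  # assumed; trace uses test/event/nbdtest
        # phase 1: wait for the kernel to list the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1                           # assumed backoff, not visible in the trace
        done
        # phase 2: the device must serve a single 4 KiB direct-I/O read
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                if [ "$size" != 0 ]; then
                    return 0
                fi
            fi
            sleep 0.1
        done
        return 1                                # device never became readable
    }
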
01:16:01.922 05:10:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:16:02.182 { 01:16:02.182 "nbd_device": "/dev/nbd0", 01:16:02.182 "bdev_name": "Malloc0" 01:16:02.182 }, 01:16:02.182 { 01:16:02.182 "nbd_device": "/dev/nbd1", 01:16:02.182 "bdev_name": "Malloc1" 01:16:02.182 } 01:16:02.182 ]' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:16:02.182 { 01:16:02.182 "nbd_device": "/dev/nbd0", 01:16:02.182 "bdev_name": "Malloc0" 01:16:02.182 }, 01:16:02.182 { 01:16:02.182 "nbd_device": "/dev/nbd1", 01:16:02.182 "bdev_name": "Malloc1" 01:16:02.182 } 01:16:02.182 ]' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:16:02.182 /dev/nbd1' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:16:02.182 /dev/nbd1' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:16:02.182 256+0 records in 01:16:02.182 256+0 records out 01:16:02.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151148 s, 69.4 MB/s 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:16:02.182 256+0 records in 01:16:02.182 256+0 records out 01:16:02.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281486 s, 37.3 MB/s 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:16:02.182 256+0 records in 01:16:02.182 256+0 records out 01:16:02.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227321 s, 46.1 MB/s 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:02.182 05:10:44 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:02.182 05:10:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:02.442 05:10:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:02.702 05:10:45 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:02.702 05:10:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:16:02.963 05:10:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:16:02.963 05:10:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:16:03.531 05:10:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:16:04.930 [2024-12-09 05:10:47.053184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:04.930 [2024-12-09 05:10:47.163554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:04.930 [2024-12-09 05:10:47.163556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:04.930 [2024-12-09 05:10:47.358668] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:16:04.930 [2024-12-09 05:10:47.358740] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:16:06.835 05:10:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:16:06.835 spdk_app_start Round 1 01:16:06.835 05:10:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:16:06.835 05:10:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59219 /var/tmp/spdk-nbd.sock 01:16:06.835 05:10:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59219 ']' 01:16:06.835 05:10:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:06.835 05:10:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:06.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:16:06.835 05:10:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
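
Round 0 above exercised the complete nbd data-verify cycle before Round 1's listener check began: seed a 1 MiB random pattern file, dd it onto each exported device with direct I/O, then cmp every device byte-for-byte against the pattern and delete it. The dd and cmp invocations below are lifted from the trace; the function scaffolding is a sketch, and the real helper (nbd_common.sh@71) takes a separate write/verify mode argument where this sketch folds both phases into one call:

    nbd_dd_data_verify_sketch() {
        local nbd_list=("$@")
        local tmp_file=/tmp/nbdrandtest         # trace uses test/event/nbdrandtest
        # write phase: 256 x 4 KiB of random data, copied onto every device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
        # verify phase: each device must read back identical to the pattern;
        # cmp exits non-zero on the first differing byte, failing the test
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"
    }

Invoked here as nbd_dd_data_verify_sketch /dev/nbd0 /dev/nbd1, matching the nbd_list the trace verifies.
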
01:16:06.835 05:10:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:06.835 05:10:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:06.835 05:10:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:06.835 05:10:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:06.835 05:10:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:06.835 Malloc0 01:16:06.835 05:10:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:07.093 Malloc1 01:16:07.352 05:10:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:16:07.352 /dev/nbd0 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:16:07.352 05:10:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:07.352 1+0 records in 01:16:07.352 1+0 records out 
01:16:07.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332545 s, 12.3 MB/s 01:16:07.352 05:10:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:07.611 05:10:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:07.611 05:10:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:07.611 05:10:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:07.611 05:10:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:07.611 05:10:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:07.611 05:10:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:07.612 05:10:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:16:07.612 /dev/nbd1 01:16:07.612 05:10:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:16:07.612 05:10:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:07.612 1+0 records in 01:16:07.612 1+0 records out 01:16:07.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285027 s, 14.4 MB/s 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:07.612 05:10:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:07.870 05:10:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:16:07.870 { 01:16:07.870 "nbd_device": "/dev/nbd0", 01:16:07.870 "bdev_name": "Malloc0" 01:16:07.870 }, 01:16:07.870 { 01:16:07.870 "nbd_device": "/dev/nbd1", 01:16:07.870 "bdev_name": "Malloc1" 01:16:07.870 } 
01:16:07.870 ]' 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:07.870 05:10:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:16:07.870 { 01:16:07.870 "nbd_device": "/dev/nbd0", 01:16:07.870 "bdev_name": "Malloc0" 01:16:07.870 }, 01:16:07.870 { 01:16:07.870 "nbd_device": "/dev/nbd1", 01:16:07.870 "bdev_name": "Malloc1" 01:16:07.870 } 01:16:07.870 ]' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:16:08.130 /dev/nbd1' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:16:08.130 /dev/nbd1' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:16:08.130 256+0 records in 01:16:08.130 256+0 records out 01:16:08.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052417 s, 200 MB/s 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:16:08.130 256+0 records in 01:16:08.130 256+0 records out 01:16:08.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261828 s, 40.0 MB/s 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:16:08.130 256+0 records in 01:16:08.130 256+0 records out 01:16:08.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0361881 s, 29.0 MB/s 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:08.130 05:10:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:08.389 05:10:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:08.648 05:10:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:08.648 05:10:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:16:08.648 05:10:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:08.648 05:10:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:16:08.907 05:10:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:16:08.908 05:10:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:16:09.167 05:10:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:16:10.545 [2024-12-09 05:10:52.683084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:10.545 [2024-12-09 05:10:52.788651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:10.545 [2024-12-09 05:10:52.788672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:10.545 [2024-12-09 05:10:52.981970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:16:10.545 [2024-12-09 05:10:52.982061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:16:12.469 05:10:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:16:12.469 spdk_app_start Round 2 01:16:12.469 05:10:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:16:12.469 05:10:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59219 /var/tmp/spdk-nbd.sock 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59219 ']' 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:12.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
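
Between rounds the harness asserts that no devices remain exported: nbd_get_disks now returns '[]', jq extracts no .nbd_device names, and grep -c counts zero. The bare 'true' at nbd_common.sh@65 in the trace is the giveaway that grep's non-zero exit on an empty list is deliberately swallowed. A sketch of that counting logic, with scripts/rpc.py abbreviated to rpc.py:

    nbd_get_count_sketch() {
        local rpc_server=$1
        local disks_json disks_name count
        disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 when it matches nothing; '|| true' turns that
        # into an ordinary count of 0 instead of a failed pipeline
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

In this log the count is 2 while Malloc0/Malloc1 are attached and 0 after both nbd_stop_disk calls, which is what the '[' 0 -ne 0 ']' guard checks before a round is allowed to finish.
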
01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:12.469 05:10:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:12.469 05:10:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:12.728 Malloc0 01:16:12.728 05:10:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:12.987 Malloc1 01:16:12.987 05:10:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:12.987 05:10:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:16:13.245 /dev/nbd0 01:16:13.245 05:10:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:16:13.245 05:10:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:13.245 1+0 records in 01:16:13.245 1+0 records out 
01:16:13.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424894 s, 9.6 MB/s 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:13.245 05:10:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:13.245 05:10:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:13.245 05:10:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:13.245 05:10:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:16:13.503 /dev/nbd1 01:16:13.503 05:10:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:16:13.503 05:10:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:16:13.503 05:10:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:16:13.503 05:10:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:13.503 05:10:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:13.503 05:10:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:13.503 05:10:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:16:13.503 05:10:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:13.504 1+0 records in 01:16:13.504 1+0 records out 01:16:13.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035867 s, 11.4 MB/s 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:13.504 05:10:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:13.504 05:10:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:13.504 05:10:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:13.504 05:10:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:13.504 05:10:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:13.504 05:10:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:13.762 05:10:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:16:13.763 { 01:16:13.763 "nbd_device": "/dev/nbd0", 01:16:13.763 "bdev_name": "Malloc0" 01:16:13.763 }, 01:16:13.763 { 01:16:13.763 "nbd_device": "/dev/nbd1", 01:16:13.763 "bdev_name": "Malloc1" 01:16:13.763 } 
01:16:13.763 ]' 01:16:13.763 05:10:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:13.763 05:10:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:16:13.763 { 01:16:13.763 "nbd_device": "/dev/nbd0", 01:16:13.763 "bdev_name": "Malloc0" 01:16:13.763 }, 01:16:13.763 { 01:16:13.763 "nbd_device": "/dev/nbd1", 01:16:13.763 "bdev_name": "Malloc1" 01:16:13.763 } 01:16:13.763 ]' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:16:13.763 /dev/nbd1' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:16:13.763 /dev/nbd1' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:16:13.763 256+0 records in 01:16:13.763 256+0 records out 01:16:13.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527568 s, 199 MB/s 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:16:13.763 256+0 records in 01:16:13.763 256+0 records out 01:16:13.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280444 s, 37.4 MB/s 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:16:13.763 256+0 records in 01:16:13.763 256+0 records out 01:16:13.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321806 s, 32.6 MB/s 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:16:13.763 05:10:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:13.763 05:10:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:14.022 05:10:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:14.280 05:10:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:16:14.556 05:10:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:16:14.556 05:10:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:16:15.123 05:10:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:16:16.058 [2024-12-09 05:10:58.443266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:16.315 [2024-12-09 05:10:58.548658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:16.315 [2024-12-09 05:10:58.548658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:16.315 [2024-12-09 05:10:58.747946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:16:16.315 [2024-12-09 05:10:58.748023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:16:18.216 05:11:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59219 /var/tmp/spdk-nbd.sock 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59219 ']' 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:16:18.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
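
With Round 2 torn down and Round 3's listener probe underway, the overall shape of the app_repeat test can be read off the event.sh line numbers in the trace: the round loop at @23, waitforlisten at @25, the two malloc creates at @27/@28, nbd_rpc_data_verify at @30, spdk_kill_instance at @34, the sleep at @35, then a final waitforlisten at @38 and killprocess at @39. Reassembled (rpc.py abbreviates scripts/rpc.py -s /var/tmp/spdk-nbd.sock):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        rpc.py bdev_malloc_create 64 4096                  # Malloc0
        rpc.py bdev_malloc_create 64 4096                  # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # SIGTERM ends the current iteration; app_repeat was started with -t 4,
        # so it restarts its listener for the next round
        rpc.py spdk_kill_instance SIGTERM
        sleep 3
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock      # Round 3 comes back up
    killprocess "$repeat_pid"

The -t 4 passed to the app_repeat binary at startup matches the four "spdk_app_start is called in Round N" notices in the shutdown summary further down.
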
01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:18.216 05:11:00 event.app_repeat -- event/event.sh@39 -- # killprocess 59219 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59219 ']' 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59219 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59219 01:16:18.216 killing process with pid 59219 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59219' 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59219 01:16:18.216 05:11:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59219 01:16:19.180 spdk_app_start is called in Round 0. 01:16:19.180 Shutdown signal received, stop current app iteration 01:16:19.180 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 01:16:19.180 spdk_app_start is called in Round 1. 01:16:19.180 Shutdown signal received, stop current app iteration 01:16:19.180 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 01:16:19.180 spdk_app_start is called in Round 2. 01:16:19.180 Shutdown signal received, stop current app iteration 01:16:19.180 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 01:16:19.180 spdk_app_start is called in Round 3. 01:16:19.180 Shutdown signal received, stop current app iteration 01:16:19.180 ************************************ 01:16:19.180 END TEST app_repeat 01:16:19.180 ************************************ 01:16:19.180 05:11:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:16:19.180 05:11:01 event.app_repeat -- event/event.sh@42 -- # return 0 01:16:19.180 01:16:19.180 real 0m19.485s 01:16:19.180 user 0m41.355s 01:16:19.180 sys 0m3.180s 01:16:19.180 05:11:01 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:19.180 05:11:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:19.440 05:11:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:16:19.440 05:11:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:16:19.440 05:11:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:19.440 05:11:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:19.440 05:11:01 event -- common/autotest_common.sh@10 -- # set +x 01:16:19.440 ************************************ 01:16:19.440 START TEST cpu_locks 01:16:19.440 ************************************ 01:16:19.440 05:11:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:16:19.440 * Looking for test storage... 
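
The killprocess helper that retires app_repeat here (and the scheduler app earlier, pid 59097) follows the sequence visible in the trace: require a pid, probe it with kill -0, resolve its command name via ps on Linux (reactor_0 in this run), refuse to signal anything named sudo, then kill and wait. An approximate rendering; the exact branch behavior when the guards fail is not visible in the log and is assumed:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # @954: a pid is required
        kill -0 "$pid" 2> /dev/null || return 0     # @958: already gone, nothing to do
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then             # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then         # @964: never SIGTERM a sudo wrapper
            return 1
        fi
        echo "killing process with pid $pid"        # @972
        kill "$pid"                                 # @973
        wait "$pid" || true                         # @978: reap; tolerate a nonzero exit
    }
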
01:16:19.440 05:11:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
01:16:19.440 05:11:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
01:16:19.440 05:11:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:19.440 05:11:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:19.440 05:11:01 event -- common/autotest_common.sh@10 -- # set +x
01:16:19.440 ************************************
01:16:19.440 START TEST cpu_locks
01:16:19.440 ************************************
01:16:19.440 05:11:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
01:16:19.440 * Looking for test storage...
01:16:19.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
01:16:19.440 05:11:01 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
01:16:19.440 05:11:01 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
01:16:19.440 05:11:01 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
01:16:19.440 05:11:01 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
01:16:19.440 05:11:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
01:16:19.441 05:11:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
01:16:19.700 05:11:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
01:16:19.700 05:11:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
01:16:19.700 05:11:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
01:16:19.700 05:11:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0
01:16:19.700 05:11:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
01:16:19.700 05:11:01 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
01:16:19.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:16:19.700 --rc genhtml_branch_coverage=1
01:16:19.700 --rc genhtml_function_coverage=1
01:16:19.700 --rc genhtml_legend=1
01:16:19.700 --rc geninfo_all_blocks=1
01:16:19.700 --rc geninfo_unexecuted_blocks=1
01:16:19.700
01:16:19.700 '
01:16:19.700 05:11:01 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
01:16:19.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:16:19.700 --rc genhtml_branch_coverage=1
01:16:19.700 --rc genhtml_function_coverage=1
01:16:19.700 --rc genhtml_legend=1
01:16:19.700 --rc geninfo_all_blocks=1
01:16:19.701 --rc geninfo_unexecuted_blocks=1
01:16:19.701
01:16:19.701 '
01:16:19.701 05:11:01 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
01:16:19.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:16:19.701 --rc genhtml_branch_coverage=1
01:16:19.701 --rc genhtml_function_coverage=1
01:16:19.701 --rc genhtml_legend=1
01:16:19.701 --rc geninfo_all_blocks=1
01:16:19.701 --rc geninfo_unexecuted_blocks=1
01:16:19.701
01:16:19.701 '
01:16:19.701 05:11:01 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
01:16:19.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
01:16:19.701 --rc genhtml_branch_coverage=1
01:16:19.701 --rc genhtml_function_coverage=1
01:16:19.701 --rc genhtml_legend=1
01:16:19.701 --rc geninfo_all_blocks=1
01:16:19.701 --rc geninfo_unexecuted_blocks=1
01:16:19.701
01:16:19.701 '
01:16:19.701 05:11:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
01:16:19.701 05:11:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
01:16:19.701 05:11:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
01:16:19.701 05:11:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
01:16:19.701 05:11:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:19.701 05:11:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:19.701 05:11:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
01:16:19.701 ************************************
01:16:19.701 START TEST default_locks
01:16:19.701 ************************************
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59667
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59667
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59667 ']'
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:19.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:19.701 05:11:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
01:16:19.701 [2024-12-09 05:11:02.031104] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:19.701 [2024-12-09 05:11:02.031229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59667 ]
01:16:19.960 [2024-12-09 05:11:02.214734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:19.960 [2024-12-09 05:11:02.324297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:20.898 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:20.898 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
01:16:20.898 05:11:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59667
01:16:20.898 05:11:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59667
01:16:20.898 05:11:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
01:16:21.157 05:11:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59667
01:16:21.157 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59667 ']'
01:16:21.157 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59667
01:16:21.157 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
01:16:21.157 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:21.416 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59667
killing process with pid 59667
01:16:21.416 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:21.416 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:21.416 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59667'
01:16:21.416 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59667
01:16:21.416 05:11:03 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59667
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59667
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59667
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
01:16:23.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:23.952 ERROR: process (pid: 59667) is no longer running
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59667
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59667 ']'
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
01:16:23.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59667) - No such process
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
01:16:23.952 ************************************
01:16:23.952 END TEST default_locks
01:16:23.952 ************************************
01:16:23.952 05:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
01:16:23.953 05:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
01:16:23.953 05:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
01:16:23.953
01:16:23.953 real 0m4.241s
01:16:23.953 user 0m4.172s
01:16:23.953 sys 0m0.700s
01:16:23.953 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
01:16:23.953 05:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
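default_locks, which just finished, rests on one observable: while spdk_tgt (pid 59667) runs with core mask 0x1, `lslocks -p <pid>` must report a lock whose path matches `spdk_cpu_lock`, and after killprocess the lock-file glob must come back empty. A standalone sketch of that check (run from an SPDK repo root; `locks_exist` imitates the harness helper seen above, and the sleep is an illustrative stand-in for waitforlisten):

    # Sketch of the core-lock check used by default_locks (assumes util-linux lslocks).
    locks_exist() {
        local pid=$1
        # spdk_tgt takes one lock file per claimed core, e.g. /var/tmp/spdk_cpu_lock_000
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    build/bin/spdk_tgt -m 0x1 &    # claim core 0
    tgt_pid=$!
    sleep 2                        # give the target time to start and claim its core

    if locks_exist "$tgt_pid"; then
        echo "core lock held by $tgt_pid"
    fi
    kill "$tgt_pid" && wait "$tgt_pid"
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no stale lock files"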
01:16:23.953 05:11:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
01:16:23.953 05:11:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:23.953 05:11:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:23.953 05:11:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
01:16:23.953 ************************************
01:16:23.953 START TEST default_locks_via_rpc
01:16:23.953 ************************************
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59742
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59742
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59742 ']'
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:23.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:23.953 05:11:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
01:16:24.212 [2024-12-09 05:11:06.333007] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:24.212 [2024-12-09 05:11:06.333134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59742 ]
01:16:24.212 [2024-12-09 05:11:06.516975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:24.212 [2024-12-09 05:11:06.628540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59742
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59742
01:16:25.150 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59742
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59742 ']'
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59742
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59742
killing process with pid 59742
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59742'
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59742
01:16:25.717 05:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59742
01:16:28.248
01:16:28.248 real 0m4.190s
01:16:28.248 user 0m4.114s
01:16:28.248 sys 0m0.691s
01:16:28.248 05:11:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
01:16:28.248 ************************************
01:16:28.248 05:11:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
01:16:28.248 END TEST default_locks_via_rpc
01:16:28.248 ************************************
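default_locks_via_rpc, which ends here, toggles the same per-core locks at runtime instead of at startup: `framework_disable_cpumask_locks` releases the lock files and `framework_enable_cpumask_locks` reclaims them, after which `lslocks` must again show `spdk_cpu_lock`. Roughly, under the same assumptions as the previous sketch (rpc.py talks to the default /var/tmp/spdk.sock; sleeps are illustrative):

    # Sketch of the runtime toggle exercised by default_locks_via_rpc.
    build/bin/spdk_tgt -m 0x1 &
    tgt=$!
    sleep 2

    scripts/rpc.py framework_disable_cpumask_locks      # drop /var/tmp/spdk_cpu_lock_*
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "locks released"

    scripts/rpc.py framework_enable_cpumask_locks       # reclaim them
    lslocks -p "$tgt" | grep spdk_cpu_lock && echo "locks reclaimed"

    kill "$tgt" && wait "$tgt"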
01:16:28.248 05:11:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
01:16:28.248 05:11:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:28.248 05:11:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:28.248 05:11:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
01:16:28.248 ************************************
01:16:28.248 START TEST non_locking_app_on_locked_coremask
01:16:28.248 ************************************
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59818
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59818 /var/tmp/spdk.sock
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59818 ']'
01:16:28.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:28.248 05:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:28.248 [2024-12-09 05:11:10.593720] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:28.248 [2024-12-09 05:11:10.594385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59818 ]
01:16:28.507 [2024-12-09 05:11:10.779416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:28.507 [2024-12-09 05:11:10.890638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59839
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59839 /var/tmp/spdk2.sock
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59839 ']'
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
01:16:29.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:29.440 05:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:29.698 [2024-12-09 05:11:11.867926] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:29.698 [2024-12-09 05:11:11.868388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ]
01:16:29.698 [2024-12-09 05:11:12.055664] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
01:16:29.698 [2024-12-09 05:11:12.055713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:29.956 [2024-12-09 05:11:12.283395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:32.494 05:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:32.494 05:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
01:16:32.494 05:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59818
01:16:32.494 05:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59818
01:16:32.494 05:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59818
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59818 ']'
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59818
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59818
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 59818
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59818'
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59818
01:16:33.062 05:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59818
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59839
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59839 ']'
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59839
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59839
killing process with pid 59839
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59839'
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59839
01:16:38.371 05:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59839
01:16:40.908 ************************************
01:16:40.908 END TEST non_locking_app_on_locked_coremask
01:16:40.908 ************************************
01:16:40.908
01:16:40.908 real 0m12.296s
01:16:40.908 user 0m12.564s
01:16:40.908 sys 0m1.443s
01:16:40.908 05:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
01:16:40.908 05:11:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
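The test that just ended demonstrates the escape hatch for sharing a claimed core: the second target was started with `--disable-cpumask-locks`, so it never tried to take the core-0 lock and could run alongside the locked first instance. The two-instance launch, reduced to its essentials (socket names and flags from the log above; sleeps are illustrative stand-ins for waitforlisten):

    # Sketch: a locked and an unlocked target sharing core mask 0x1.
    build/bin/spdk_tgt -m 0x1 &                  # first instance claims core 0
    first=$!
    sleep 2

    # Second instance skips the core-lock claim entirely, so there is no conflict:
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!
    sleep 2

    lslocks -p "$first" | grep spdk_cpu_lock     # only the first holds the lock
    kill "$second" "$first"; wait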
01:16:40.908 05:11:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
01:16:40.908 05:11:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:40.908 05:11:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:40.908 05:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
01:16:40.908 ************************************
01:16:40.908 START TEST locking_app_on_unlocked_coremask
01:16:40.908 ************************************
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59996
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59996 /var/tmp/spdk.sock
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59996 ']'
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:40.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:40.908 05:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:40.908 [2024-12-09 05:11:22.960230] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:40.908 [2024-12-09 05:11:22.960587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59996 ]
01:16:40.908 [2024-12-09 05:11:23.142680] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
01:16:40.908 [2024-12-09 05:11:23.142725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:40.908 [2024-12-09 05:11:23.254790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60012
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60012 /var/tmp/spdk2.sock
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60012 ']'
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
01:16:41.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:41.844 05:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:41.844 [2024-12-09 05:11:24.227515] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:41.844 [2024-12-09 05:11:24.228208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60012 ]
01:16:42.104 [2024-12-09 05:11:24.415711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:42.364 [2024-12-09 05:11:24.638320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:44.896 05:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:44.896 05:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
01:16:44.896 05:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60012
01:16:44.896 05:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60012
01:16:44.896 05:11:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59996
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59996 ']'
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59996
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59996
killing process with pid 59996
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59996'
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59996
01:16:45.155 05:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59996
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60012
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60012 ']'
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60012
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60012
killing process with pid 60012
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60012'
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60012
01:16:50.424 05:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60012
01:16:53.024
01:16:53.024 real 0m12.144s
01:16:53.024 user 0m12.381s
01:16:53.024 sys 0m1.437s
01:16:53.024 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
01:16:53.024 ************************************
01:16:53.024 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:53.024 END TEST locking_app_on_unlocked_coremask
01:16:53.024 ************************************
01:16:53.024 05:11:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
01:16:53.024 05:11:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:53.024 05:11:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:53.024 05:11:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
01:16:53.024 ************************************
01:16:53.024 START TEST locking_app_on_locked_coremask
01:16:53.024 ************************************
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60171
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60171 /var/tmp/spdk.sock
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60171 ']'
01:16:53.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:53.024 05:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:53.024 [2024-12-09 05:11:35.175811] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:53.024 [2024-12-09 05:11:35.175937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ]
01:16:53.024 [2024-12-09 05:11:35.361134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:16:53.282 [2024-12-09 05:11:35.476947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60187
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60187 /var/tmp/spdk2.sock
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60187 /var/tmp/spdk2.sock
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60187 /var/tmp/spdk2.sock
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60187 ']'
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
01:16:54.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:54.219 05:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:54.219 [2024-12-09 05:11:36.461455] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:54.219 [2024-12-09 05:11:36.461805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ]
01:16:54.219 [2024-12-09 05:11:36.647674] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60171 has claimed it.
01:16:54.219 [2024-12-09 05:11:36.647751] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
01:16:54.785 ERROR: process (pid: 60187) is no longer running
01:16:54.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60187) - No such process
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60171
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60171
01:16:54.785 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
01:16:55.351 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60171
01:16:55.351 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60171 ']'
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60171
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60171
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60171'
killing process with pid 60171
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60171
01:16:55.352 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60171
01:16:57.883
01:16:57.883 real 0m5.035s
01:16:57.883 user 0m5.196s
01:16:57.883 sys 0m0.853s
01:16:57.883 05:11:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
01:16:57.883 05:11:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:57.883 ************************************
01:16:57.883 END TEST locking_app_on_locked_coremask
01:16:57.883 ************************************
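locking_app_on_locked_coremask, which ends here, is the negative case: with locks active, a second spdk_tgt on an already-claimed core mask is refused at startup ("Cannot create lock on core 0, probably process 60171 has claimed it.") and exits before ever listening on its socket. The collision can be reproduced directly (binary path and sockets as in the log; exit-status handling simplified, sleep illustrative):

    # Sketch: second target on an already-claimed core mask must fail fast.
    build/bin/spdk_tgt -m 0x1 &                  # holds the lock file for core 0
    holder=$!
    sleep 2

    # Same mask, locks enabled: startup aborts with the claim_cpu_cores error.
    if ! build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance refused, as expected"
    fi

    kill "$holder" && wait "$holder"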
01:16:57.883 05:11:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
01:16:57.883 05:11:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:16:57.883 05:11:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
01:16:57.883 05:11:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
01:16:57.883 ************************************
01:16:57.883 START TEST locking_overlapped_coremask
01:16:57.883 ************************************
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60257
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60257 /var/tmp/spdk.sock
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60257 ']'
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:57.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:57.884 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:57.884 [2024-12-09 05:11:40.284497] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:57.884 [2024-12-09 05:11:40.284787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ]
01:16:58.142 [2024-12-09 05:11:40.471084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
01:16:58.142 [2024-12-09 05:11:40.587920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:16:58.142 [2024-12-09 05:11:40.587941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:16:58.142 [2024-12-09 05:11:40.587959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60280
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60280 /var/tmp/spdk2.sock
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60280 /var/tmp/spdk2.sock
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60280 /var/tmp/spdk2.sock
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60280 ']'
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
01:16:59.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
01:16:59.079 05:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
01:16:59.338 [2024-12-09 05:11:41.561432] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:16:59.338 [2024-12-09 05:11:41.561572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ]
01:16:59.338 [2024-12-09 05:11:41.747629] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60257 has claimed it.
01:16:59.338 [2024-12-09 05:11:41.747693] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
01:16:59.907 ERROR: process (pid: 60280) is no longer running
01:16:59.907 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60280) - No such process
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60257
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60257 ']'
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60257
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60257
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60257'
killing process with pid 60257
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60257
01:16:59.907 05:11:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60257
01:17:02.442
01:17:02.442 real 0m4.567s
01:17:02.442 user 0m12.173s
01:17:02.442 sys 0m0.644s
01:17:02.442 05:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
01:17:02.442 05:11:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
01:17:02.442 ************************************
01:17:02.442 END TEST locking_overlapped_coremask
01:17:02.442 ************************************
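After the overlapped-mask launch (0x1c against a holder of 0x7) fails on their shared core 2, check_remaining_locks, visible above, verifies that the survivor still holds exactly the locks for cores 0-2 and nothing else, by comparing the lock-file glob against a brace expansion. The comparison logic in isolation, lifted almost verbatim from the trace above into a self-contained function:

    # Sketch of check_remaining_locks from the log: the glob of existing lock
    # files must equal the expected set for core mask 0x7 (cores 0, 1, 2).
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }

    if check_remaining_locks; then
        echo "exactly cores 0-2 are locked"
    else
        echo "unexpected lock files present"
    fi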
01:17:02.701 [2024-12-09 05:11:45.105189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:02.960 [2024-12-09 05:11:45.221975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:02.960 [2024-12-09 05:11:45.222121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:02.960 [2024-12-09 05:11:45.222149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60370 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60370 /var/tmp/spdk2.sock 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60370 ']' 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:17:03.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:03.892 05:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:17:03.892 [2024-12-09 05:11:46.238642] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:03.892 [2024-12-09 05:11:46.239523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60370 ] 01:17:04.150 [2024-12-09 05:11:46.427969] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
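The second target is deliberately given a core mask that overlaps the first on core 2: 0x7 covers cores 0-2 and 0x1c covers cores 2-4. A quick illustrative check in bash:

    for mask in 0x7 0x1c; do
        printf '%-4s -> cores:' "$mask"
        for i in {0..7}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
        printf '\n'
    done
    # 0x7  -> cores: 0 1 2
    # 0x1c -> cores: 2 3 4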
01:17:04.150 [2024-12-09 05:11:46.428024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:04.408 [2024-12-09 05:11:46.666655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:17:04.408 [2024-12-09 05:11:46.670649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:04.408 [2024-12-09 05:11:46.670681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:17:06.939 [2024-12-09 05:11:48.822651] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60346 has claimed it. 
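At this point the first target (pid 60346) already holds the core-lock files for cores 0-2, so the second target's claim on core 2 is rejected and the client receives the JSON-RPC error dumped just below. When reproducing by hand, the conflicting files can simply be listed (a sketch; the naming follows the check_remaining_locks glob seen earlier in this log):

    ls -l /var/tmp/spdk_cpu_lock_00{0,1,2}    # present while the first target has its locks enabled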
01:17:06.939 request: 01:17:06.939 { 01:17:06.939 "method": "framework_enable_cpumask_locks", 01:17:06.939 "req_id": 1 01:17:06.939 } 01:17:06.939 Got JSON-RPC error response 01:17:06.939 response: 01:17:06.939 { 01:17:06.939 "code": -32603, 01:17:06.939 "message": "Failed to claim CPU core: 2" 01:17:06.939 } 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60346 /var/tmp/spdk.sock 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60346 ']' 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:06.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:06.939 05:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:17:06.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60370 /var/tmp/spdk2.sock 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60370 ']' 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
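The NOT prefix on the failing rpc_cmd above is the autotest helper for expected failures: the step passes only because the wrapped command exits nonzero. Reduced to its core idea (the real helper in autotest_common.sh additionally records the exit status in es, as the trace shows):

    NOT() { ! "$@"; }    # succeed only when the wrapped command fails
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks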
01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:17:06.939 ************************************ 01:17:06.939 END TEST locking_overlapped_coremask_via_rpc 01:17:06.939 ************************************ 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:17:06.939 01:17:06.939 real 0m4.460s 01:17:06.939 user 0m1.287s 01:17:06.939 sys 0m0.228s 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:06.939 05:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:17:06.939 05:11:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:17:06.939 05:11:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60346 ]] 01:17:06.939 05:11:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60346 01:17:06.939 05:11:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60346 ']' 01:17:06.939 05:11:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60346 01:17:06.939 05:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60346 01:17:06.940 killing process with pid 60346 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60346' 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60346 01:17:06.940 05:11:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60346 01:17:10.241 05:11:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60370 ]] 01:17:10.241 05:11:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60370 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60370 ']' 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60370 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:10.241 
05:11:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60370 01:17:10.241 killing process with pid 60370 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60370' 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60370 01:17:10.241 05:11:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60370 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60346 ]] 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60346 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60346 ']' 01:17:12.794 Process with pid 60346 is not found 01:17:12.794 Process with pid 60370 is not found 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60346 01:17:12.794 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60346) - No such process 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60346 is not found' 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60370 ]] 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60370 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60370 ']' 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60370 01:17:12.794 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60370) - No such process 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60370 is not found' 01:17:12.794 05:11:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:17:12.794 01:17:12.794 real 0m53.030s 01:17:12.794 user 1m29.878s 01:17:12.794 sys 0m7.282s 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:12.794 05:11:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:17:12.794 ************************************ 01:17:12.794 END TEST cpu_locks 01:17:12.794 ************************************ 01:17:12.794 01:17:12.794 real 1m25.215s 01:17:12.794 user 2m33.877s 01:17:12.794 sys 0m11.756s 01:17:12.794 ************************************ 01:17:12.794 END TEST event 01:17:12.794 ************************************ 01:17:12.794 05:11:54 event -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:12.794 05:11:54 event -- common/autotest_common.sh@10 -- # set +x 01:17:12.794 05:11:54 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:17:12.794 05:11:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:12.794 05:11:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:12.794 05:11:54 -- common/autotest_common.sh@10 -- # set +x 01:17:12.794 ************************************ 01:17:12.794 START TEST thread 01:17:12.794 ************************************ 01:17:12.794 05:11:54 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:17:12.794 * Looking for test storage... 
01:17:12.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:17:12.794 05:11:54 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:12.794 05:11:54 thread -- common/autotest_common.sh@1693 -- # lcov --version 01:17:12.794 05:11:54 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:12.794 05:11:55 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:12.794 05:11:55 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:12.794 05:11:55 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:12.794 05:11:55 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:12.794 05:11:55 thread -- scripts/common.sh@336 -- # IFS=.-: 01:17:12.794 05:11:55 thread -- scripts/common.sh@336 -- # read -ra ver1 01:17:12.794 05:11:55 thread -- scripts/common.sh@337 -- # IFS=.-: 01:17:12.794 05:11:55 thread -- scripts/common.sh@337 -- # read -ra ver2 01:17:12.794 05:11:55 thread -- scripts/common.sh@338 -- # local 'op=<' 01:17:12.794 05:11:55 thread -- scripts/common.sh@340 -- # ver1_l=2 01:17:12.794 05:11:55 thread -- scripts/common.sh@341 -- # ver2_l=1 01:17:12.794 05:11:55 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:12.794 05:11:55 thread -- scripts/common.sh@344 -- # case "$op" in 01:17:12.794 05:11:55 thread -- scripts/common.sh@345 -- # : 1 01:17:12.794 05:11:55 thread -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:12.794 05:11:55 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:17:12.794 05:11:55 thread -- scripts/common.sh@365 -- # decimal 1 01:17:12.794 05:11:55 thread -- scripts/common.sh@353 -- # local d=1 01:17:12.794 05:11:55 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:12.794 05:11:55 thread -- scripts/common.sh@355 -- # echo 1 01:17:12.794 05:11:55 thread -- scripts/common.sh@365 -- # ver1[v]=1 01:17:12.794 05:11:55 thread -- scripts/common.sh@366 -- # decimal 2 01:17:12.794 05:11:55 thread -- scripts/common.sh@353 -- # local d=2 01:17:12.794 05:11:55 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:12.795 05:11:55 thread -- scripts/common.sh@355 -- # echo 2 01:17:12.795 05:11:55 thread -- scripts/common.sh@366 -- # ver2[v]=2 01:17:12.795 05:11:55 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:12.795 05:11:55 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:12.795 05:11:55 thread -- scripts/common.sh@368 -- # return 0 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:12.795 --rc genhtml_branch_coverage=1 01:17:12.795 --rc genhtml_function_coverage=1 01:17:12.795 --rc genhtml_legend=1 01:17:12.795 --rc geninfo_all_blocks=1 01:17:12.795 --rc geninfo_unexecuted_blocks=1 01:17:12.795 01:17:12.795 ' 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:12.795 --rc genhtml_branch_coverage=1 01:17:12.795 --rc genhtml_function_coverage=1 01:17:12.795 --rc genhtml_legend=1 01:17:12.795 --rc geninfo_all_blocks=1 01:17:12.795 --rc geninfo_unexecuted_blocks=1 01:17:12.795 01:17:12.795 ' 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
01:17:12.795 --rc genhtml_branch_coverage=1 01:17:12.795 --rc genhtml_function_coverage=1 01:17:12.795 --rc genhtml_legend=1 01:17:12.795 --rc geninfo_all_blocks=1 01:17:12.795 --rc geninfo_unexecuted_blocks=1 01:17:12.795 01:17:12.795 ' 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:12.795 --rc genhtml_branch_coverage=1 01:17:12.795 --rc genhtml_function_coverage=1 01:17:12.795 --rc genhtml_legend=1 01:17:12.795 --rc geninfo_all_blocks=1 01:17:12.795 --rc geninfo_unexecuted_blocks=1 01:17:12.795 01:17:12.795 ' 01:17:12.795 05:11:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:12.795 05:11:55 thread -- common/autotest_common.sh@10 -- # set +x 01:17:12.795 ************************************ 01:17:12.795 START TEST thread_poller_perf 01:17:12.795 ************************************ 01:17:12.795 05:11:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:17:12.795 [2024-12-09 05:11:55.120479] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:12.795 [2024-12-09 05:11:55.120718] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 01:17:13.054 [2024-12-09 05:11:55.302960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:13.054 [2024-12-09 05:11:55.450696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:13.054 Running 1000 pollers for 1 seconds with 1 microseconds period. 
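For reference, poller_perf's flags as the banner above decodes them: -b is the number of pollers to register, -l the poller period in microseconds (0 appears to register plain, untimed pollers), and -t the run time in seconds. The two invocations in this suite are therefore:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1    # 1000 timed pollers, 1 us period, 1 s
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1    # 1000 untimed pollers, 1 s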
01:17:14.430 [2024-12-09T05:11:56.886Z] ====================================== 01:17:14.430 [2024-12-09T05:11:56.886Z] busy:2502061626 (cyc) 01:17:14.430 [2024-12-09T05:11:56.886Z] total_run_count: 389000 01:17:14.430 [2024-12-09T05:11:56.886Z] tsc_hz: 2490000000 (cyc) 01:17:14.430 [2024-12-09T05:11:56.886Z] ====================================== 01:17:14.430 [2024-12-09T05:11:56.886Z] poller_cost: 6432 (cyc), 2583 (nsec) 01:17:14.430 01:17:14.430 real 0m1.718s 01:17:14.430 user 0m1.483s 01:17:14.430 sys 0m0.124s 01:17:14.430 05:11:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:14.430 05:11:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:17:14.430 ************************************ 01:17:14.430 END TEST thread_poller_perf 01:17:14.430 ************************************ 01:17:14.430 05:11:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:17:14.430 05:11:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:17:14.430 05:11:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:14.430 05:11:56 thread -- common/autotest_common.sh@10 -- # set +x 01:17:14.430 ************************************ 01:17:14.430 START TEST thread_poller_perf 01:17:14.430 ************************************ 01:17:14.430 05:11:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:17:14.689 [2024-12-09 05:11:56.916208] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:14.689 [2024-12-09 05:11:56.916326] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 01:17:14.689 [2024-12-09 05:11:57.100403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:14.981 Running 1000 pollers for 1 seconds with 0 microseconds period. 
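The summary block above checks out by hand: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure converts cycles at tsc_hz. The same arithmetic applies to the 0 us run reported next:

    echo $(( 2502061626 / 389000 ))               # 6432 cycles per poller invocation (1 us run)
    echo $(( 6432 * 1000000000 / 2490000000 ))    # 2583 nsec at tsc_hz=2490000000
    echo $(( 2494115860 / 5107000 ))              # 488 cycles per invocation (0 us run)
    echo $(( 488 * 1000000000 / 2490000000 ))     # 195 nsec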
01:17:14.981 [2024-12-09 05:11:57.236270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:16.358 [2024-12-09T05:11:58.814Z] ====================================== 01:17:16.358 [2024-12-09T05:11:58.814Z] busy:2494115860 (cyc) 01:17:16.358 [2024-12-09T05:11:58.814Z] total_run_count: 5107000 01:17:16.358 [2024-12-09T05:11:58.814Z] tsc_hz: 2490000000 (cyc) 01:17:16.358 [2024-12-09T05:11:58.814Z] ====================================== 01:17:16.359 [2024-12-09T05:11:58.815Z] poller_cost: 488 (cyc), 195 (nsec) 01:17:16.359 ************************************ 01:17:16.359 END TEST thread_poller_perf 01:17:16.359 ************************************ 01:17:16.359 01:17:16.359 real 0m1.699s 01:17:16.359 user 0m1.468s 01:17:16.359 sys 0m0.123s 01:17:16.359 05:11:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:16.359 05:11:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:17:16.359 05:11:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:17:16.359 ************************************ 01:17:16.359 END TEST thread 01:17:16.359 ************************************ 01:17:16.359 01:17:16.359 real 0m3.798s 01:17:16.359 user 0m3.122s 01:17:16.359 sys 0m0.463s 01:17:16.359 05:11:58 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:16.359 05:11:58 thread -- common/autotest_common.sh@10 -- # set +x 01:17:16.359 05:11:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 01:17:16.359 05:11:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:17:16.359 05:11:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:16.359 05:11:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:16.359 05:11:58 -- common/autotest_common.sh@10 -- # set +x 01:17:16.359 ************************************ 01:17:16.359 START TEST app_cmdline 01:17:16.359 ************************************ 01:17:16.359 05:11:58 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:17:16.618 * Looking for test storage... 
01:17:16.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:17:16.618 05:11:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:17:16.618 05:11:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60696 01:17:16.618 05:11:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:17:16.618 05:11:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60696 01:17:16.618 05:11:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60696 ']' 01:17:16.618 05:11:58 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:16.618 05:11:58 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:16.618 05:11:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:16.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:16.618 05:11:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:16.618 05:11:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:17:16.618 [2024-12-09 05:11:59.036120] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:17:16.618 [2024-12-09 05:11:59.036449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60696 ] 01:17:16.878 [2024-12-09 05:11:59.218605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:17.137 [2024-12-09 05:11:59.351136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:18.074 05:12:00 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:18.074 05:12:00 app_cmdline -- common/autotest_common.sh@868 -- # return 0 01:17:18.074 05:12:00 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:17:18.334 { 01:17:18.334 "version": "SPDK v25.01-pre git sha1 cabd61f7f", 01:17:18.334 "fields": { 01:17:18.334 "major": 25, 01:17:18.334 "minor": 1, 01:17:18.334 "patch": 0, 01:17:18.334 "suffix": "-pre", 01:17:18.334 "commit": "cabd61f7f" 01:17:18.334 } 01:17:18.334 } 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@26 -- # sort 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:17:18.334 05:12:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:17:18.334 05:12:00 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:17:18.593 request: 01:17:18.593 { 01:17:18.593 "method": "env_dpdk_get_mem_stats", 01:17:18.593 "req_id": 1 01:17:18.593 } 01:17:18.593 Got JSON-RPC error response 01:17:18.593 response: 01:17:18.593 { 01:17:18.593 "code": -32601, 01:17:18.593 "message": "Method not found" 01:17:18.593 } 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@655 -- # es=1 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:18.593 05:12:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60696 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60696 ']' 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60696 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@959 -- # uname 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:18.593 05:12:00 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60696 01:17:18.593 killing process with pid 60696 01:17:18.594 05:12:00 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:18.594 05:12:00 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:17:18.594 05:12:00 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60696' 01:17:18.594 05:12:00 app_cmdline -- common/autotest_common.sh@973 -- # kill 60696 01:17:18.594 05:12:00 app_cmdline -- common/autotest_common.sh@978 -- # wait 60696 01:17:21.128 01:17:21.128 real 0m4.872s 01:17:21.128 user 0m4.852s 01:17:21.128 sys 0m0.809s 01:17:21.128 ************************************ 01:17:21.128 END TEST app_cmdline 01:17:21.128 ************************************ 01:17:21.128 05:12:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:21.128 05:12:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:17:21.387 05:12:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:17:21.387 05:12:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:21.387 05:12:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:21.387 05:12:03 -- common/autotest_common.sh@10 -- # set +x 01:17:21.387 ************************************ 01:17:21.387 START TEST version 01:17:21.387 ************************************ 01:17:21.387 05:12:03 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:17:21.387 * Looking for test storage... 
01:17:21.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:17:21.647 05:12:03 version -- app/version.sh@17 -- # get_header_version major 01:17:21.647 05:12:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # cut -f2 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # tr -d '"' 01:17:21.647 05:12:03 version -- app/version.sh@17 -- # major=25 01:17:21.647 05:12:03 version -- app/version.sh@18 -- # get_header_version minor 01:17:21.647 05:12:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # cut -f2 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # tr -d '"' 01:17:21.647 05:12:03 version -- app/version.sh@18 -- # minor=1 01:17:21.647 05:12:03 version -- app/version.sh@19 -- # get_header_version patch 01:17:21.647 05:12:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # cut -f2 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # tr -d '"' 01:17:21.647 05:12:03 version -- app/version.sh@19 -- # patch=0 01:17:21.647 05:12:03 version -- app/version.sh@20 -- # get_header_version suffix 01:17:21.647 05:12:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # cut -f2 01:17:21.647 05:12:03 version -- app/version.sh@14 -- # tr -d '"' 01:17:21.647 05:12:03 version -- app/version.sh@20 -- # suffix=-pre 01:17:21.647 05:12:03 version -- app/version.sh@22 -- # version=25.1 01:17:21.647 05:12:03 version -- app/version.sh@25 -- # (( patch != 0 )) 01:17:21.647 05:12:03 version -- app/version.sh@28 -- # version=25.1rc0 01:17:21.647 05:12:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:17:21.647 05:12:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:17:21.647 05:12:03 version -- app/version.sh@30 -- # py_version=25.1rc0 01:17:21.647 05:12:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 01:17:21.647 ************************************ 01:17:21.647 END TEST version 01:17:21.647 ************************************ 01:17:21.647 01:17:21.647 real 0m0.332s 01:17:21.647 user 0m0.178s 01:17:21.647 sys 0m0.209s 01:17:21.647 05:12:03 version -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:21.647 05:12:03 version -- common/autotest_common.sh@10 -- # set +x 01:17:21.647
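Each version component above is derived by grepping the matching #define out of include/spdk/version.h, so get_header_version reduces to a one-line pipeline. A sketch of the pattern (the ${1^^} uppercase mapping is shorthand here; the real app/version.sh may differ in detail), which the test then cross-checks against spdk.__version__ from the Python package exactly as the trace shows:

    get_header_version() {    # e.g. get_header_version major -> 25
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }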
05:12:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 01:17:21.647 05:12:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 01:17:21.647 05:12:04 -- spdk/autotest.sh@194 -- # uname -s 01:17:21.647 05:12:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 01:17:21.647 05:12:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:17:21.647 05:12:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:17:21.647 05:12:04 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 01:17:21.647 05:12:04 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 01:17:21.647 05:12:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:21.647 05:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:21.647 05:12:04 -- common/autotest_common.sh@10 -- # set +x 01:17:21.647 ************************************ 01:17:21.647 START TEST blockdev_nvme 01:17:21.647 ************************************ 01:17:21.647 05:12:04 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 01:17:21.958 * Looking for test storage... 01:17:21.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:17:21.958 05:12:04 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # export
RPC_PIPE_TIMEOUT=30 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 01:17:21.958 05:12:04 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60890 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:17:21.959 05:12:04 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60890 01:17:21.959 05:12:04 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60890 ']' 01:17:21.959 05:12:04 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:21.959 05:12:04 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:21.959 05:12:04 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:21.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:21.959 05:12:04 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:21.959 05:12:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:22.232 [2024-12-09 05:12:04.428314] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
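blockdev.sh brings up its own target with the usual start/trap/wait pattern; condensed, blockdev.sh@46-49 above amounts to the following sketch (killprocess and waitforlisten are the autotest_common.sh helpers, the latter retrying up to max_retries=100 until /var/tmp/spdk.sock answers):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"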
01:17:22.232 [2024-12-09 05:12:04.428664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60890 ] 01:17:22.232 [2024-12-09 05:12:04.614424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:22.491 [2024-12-09 05:12:04.743950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:23.427 05:12:05 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:23.427 05:12:05 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 01:17:23.427 05:12:05 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:17:23.427 05:12:05 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 01:17:23.427 05:12:05 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 01:17:23.427 05:12:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 01:17:23.427 05:12:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:17:23.685 05:12:05 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 01:17:23.685 05:12:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:23.685 05:12:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:23.943 05:12:06 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 01:17:23.943 05:12:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:23.943 05:12:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:24.203 05:12:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:17:24.203 05:12:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 01:17:24.204 05:12:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "aa6c768c-9ce6-4373-8951-ea12dd71052f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "aa6c768c-9ce6-4373-8951-ea12dd71052f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ed4e24ee-68c9-4517-8444-7ca8e100e328"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ed4e24ee-68c9-4517-8444-7ca8e100e328",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "89da8560-8d67-4e97-923a-559164297a50"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "89da8560-8d67-4e97-923a-559164297a50",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "11d25450-8fe3-4dc4-aa9f-ad2d96b9e07a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11d25450-8fe3-4dc4-aa9f-ad2d96b9e07a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "361c9fcb-b456-4af6-a4ca-0570c644684b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "361c9fcb-b456-4af6-a4ca-0570c644684b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "db4a928d-4a64-41d7-9c55-3b1434284ba8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "db4a928d-4a64-41d7-9c55-3b1434284ba8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 01:17:24.204 05:12:06 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:17:24.204 05:12:06 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 01:17:24.204 05:12:06 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:17:24.204 05:12:06 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60890 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60890 ']' 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60890 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 01:17:24.204 05:12:06 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60890 01:17:24.204 killing process with pid 60890 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60890' 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60890 01:17:24.204 05:12:06 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60890 01:17:27.490 05:12:09 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:17:27.490 05:12:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:17:27.490 05:12:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:17:27.490 05:12:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:27.490 05:12:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:27.490 ************************************ 01:17:27.490 START TEST bdev_hello_world 01:17:27.490 ************************************ 01:17:27.490 05:12:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:17:27.490 [2024-12-09 05:12:09.332244] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:27.490 [2024-12-09 05:12:09.332564] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 01:17:27.490 [2024-12-09 05:12:09.522069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:27.490 [2024-12-09 05:12:09.664944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:28.058 [2024-12-09 05:12:10.402426] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:17:28.058 [2024-12-09 05:12:10.402500] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 01:17:28.058 [2024-12-09 05:12:10.402523] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:17:28.058 [2024-12-09 05:12:10.405703] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:17:28.058 [2024-12-09 05:12:10.406249] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:17:28.058 [2024-12-09 05:12:10.406287] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:17:28.058 [2024-12-09 05:12:10.406511] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
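The hello_world run above reduces to a JSON bdev configuration plus one example binary. A minimal sketch of the same flow with a single controller (the /tmp path and PCI address are illustrative; the CI run feeds hello_bdev the test/bdev/bdev.json generated from gen_nvme.sh):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# Writes a string to the first namespace of that controller and reads it back,
# matching the hello_bdev write/read notices in the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1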
01:17:28.058 01:17:28.058 [2024-12-09 05:12:10.406535] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:17:29.433 01:17:29.433 real 0m2.508s 01:17:29.433 user 0m2.062s 01:17:29.433 sys 0m0.336s 01:17:29.433 ************************************ 01:17:29.433 END TEST bdev_hello_world 01:17:29.433 ************************************ 01:17:29.433 05:12:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:29.433 05:12:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:17:29.433 05:12:11 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:17:29.433 05:12:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:29.433 05:12:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:29.433 05:12:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:29.433 ************************************ 01:17:29.433 START TEST bdev_bounds 01:17:29.433 ************************************ 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61038 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:17:29.433 Process bdevio pid: 61038 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61038' 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61038 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61038 ']' 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:29.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:29.433 05:12:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:17:29.692 [2024-12-09 05:12:11.911116] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
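bdev_bounds, starting here, drives the same bdev set through the bdevio CUnit app. Stripped of the harness, the sequence is roughly the sketch below; reading the trace, -w makes bdevio start up and wait for an RPC trigger and -s 0 requests no pre-reserved memory (PRE_RESERVED_MEM=0 earlier), but treat the flag readings and the wait step as assumptions rather than a spec:

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK/test/bdev/bdev.json &
bdevio_pid=$!
# ...poll /var/tmp/spdk.sock as in the waitforlisten sketch earlier...
$SPDK/test/bdev/bdevio/tests.py perform_tests    # fires the suites listed below
kill "$bdevio_pid"
wait "$bdevio_pid"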
01:17:29.692 [2024-12-09 05:12:11.911246] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61038 ] 01:17:29.692 [2024-12-09 05:12:12.097578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:29.949 [2024-12-09 05:12:12.239708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:29.949 [2024-12-09 05:12:12.239866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:29.949 [2024-12-09 05:12:12.239888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:30.883 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:30.883 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:17:30.883 05:12:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:17:30.883 I/O targets: 01:17:30.883 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 01:17:30.883 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 01:17:30.883 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 01:17:30.883 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 01:17:30.883 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 01:17:30.883 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 01:17:30.883 01:17:30.883 01:17:30.883 CUnit - A unit testing framework for C - Version 2.1-3 01:17:30.883 http://cunit.sourceforge.net/ 01:17:30.883 01:17:30.883 01:17:30.883 Suite: bdevio tests on: Nvme3n1 01:17:30.883 Test: blockdev write read block ...passed 01:17:30.883 Test: blockdev write zeroes read block ...passed 01:17:30.883 Test: blockdev write zeroes read no split ...passed 01:17:30.883 Test: blockdev write zeroes read split ...passed 01:17:30.883 Test: blockdev write zeroes read split partial ...passed 01:17:30.883 Test: blockdev reset ...[2024-12-09 05:12:13.187955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 01:17:30.883 passed 01:17:30.883 Test: blockdev write read 8 blocks ...[2024-12-09 05:12:13.192363] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
01:17:30.883 passed 01:17:30.883 Test: blockdev write read size > 128k ...passed 01:17:30.883 Test: blockdev write read invalid size ...passed 01:17:30.883 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:17:30.883 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:17:30.883 Test: blockdev write read max offset ...passed 01:17:30.883 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:17:30.883 Test: blockdev writev readv 8 blocks ...passed 01:17:30.883 Test: blockdev writev readv 30 x 1block ...passed 01:17:30.883 Test: blockdev writev readv block ...passed 01:17:30.883 Test: blockdev writev readv size > 128k ...passed 01:17:30.883 Test: blockdev writev readv size > 128k in two iovs ...passed 01:17:30.883 Test: blockdev comparev and writev ...[2024-12-09 05:12:13.202579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf00a000 len:0x1000 01:17:30.883 [2024-12-09 05:12:13.202745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:17:30.883 passed 01:17:30.883 Test: blockdev nvme passthru rw ...passed 01:17:30.883 Test: blockdev nvme passthru vendor specific ...passed 01:17:30.883 Test: blockdev nvme admin passthru ...[2024-12-09 05:12:13.203626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:17:30.883 [2024-12-09 05:12:13.203672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:17:30.883 passed 01:17:30.883 Test: blockdev copy ...passed 01:17:30.883 Suite: bdevio tests on: Nvme2n3 01:17:30.883 Test: blockdev write read block ...passed 01:17:30.883 Test: blockdev write zeroes read block ...passed 01:17:30.883 Test: blockdev write zeroes read no split ...passed 01:17:30.883 Test: blockdev write zeroes read split ...passed 01:17:30.883 Test: blockdev write zeroes read split partial ...passed 01:17:30.883 Test: blockdev reset ...[2024-12-09 05:12:13.284208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:17:30.883 [2024-12-09 05:12:13.288809] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 01:17:30.883 passed 01:17:30.883 Test: blockdev write read 8 blocks ...
01:17:30.883 passed 01:17:30.883 Test: blockdev write read size > 128k ...passed 01:17:30.884 Test: blockdev write read invalid size ...passed 01:17:30.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:17:30.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:17:30.884 Test: blockdev write read max offset ...passed 01:17:30.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:17:30.884 Test: blockdev writev readv 8 blocks ...passed 01:17:30.884 Test: blockdev writev readv 30 x 1block ...passed 01:17:30.884 Test: blockdev writev readv block ...passed 01:17:30.884 Test: blockdev writev readv size > 128k ...passed 01:17:30.884 Test: blockdev writev readv size > 128k in two iovs ...passed 01:17:30.884 Test: blockdev comparev and writev ...[2024-12-09 05:12:13.299029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a2206000 len:0x1000 01:17:30.884 [2024-12-09 05:12:13.299202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:17:30.884 passed 01:17:30.884 Test: blockdev nvme passthru rw ...passed 01:17:30.884 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:12:13.300478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:17:30.884 [2024-12-09 05:12:13.300625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:17:30.884 passed 01:17:30.884 Test: blockdev nvme admin passthru ...passed 01:17:30.884 Test: blockdev copy ...passed 01:17:30.884 Suite: bdevio tests on: Nvme2n2 01:17:30.884 Test: blockdev write read block ...passed 01:17:30.884 Test: blockdev write zeroes read block ...passed 01:17:30.884 Test: blockdev write zeroes read no split ...passed 01:17:31.143 Test: blockdev write zeroes read split ...passed 01:17:31.143 Test: blockdev write zeroes read split partial ...passed 01:17:31.143 Test: blockdev reset ...[2024-12-09 05:12:13.375480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:17:31.143 [2024-12-09 05:12:13.380046] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 01:17:31.143 passed 01:17:31.143 Test: blockdev write read 8 blocks ...
01:17:31.143 passed 01:17:31.143 Test: blockdev write read size > 128k ...passed 01:17:31.143 Test: blockdev write read invalid size ...passed 01:17:31.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:17:31.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:17:31.143 Test: blockdev write read max offset ...passed 01:17:31.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:17:31.143 Test: blockdev writev readv 8 blocks ...passed 01:17:31.143 Test: blockdev writev readv 30 x 1block ...passed 01:17:31.143 Test: blockdev writev readv block ...passed 01:17:31.143 Test: blockdev writev readv size > 128k ...passed 01:17:31.143 Test: blockdev writev readv size > 128k in two iovs ...passed 01:17:31.143 Test: blockdev comparev and writev ...[2024-12-09 05:12:13.389564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf03c000 len:0x1000 01:17:31.143 [2024-12-09 05:12:13.389711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:17:31.143 passed 01:17:31.143 Test: blockdev nvme passthru rw ...passed 01:17:31.143 Test: blockdev nvme passthru vendor specific ...passed 01:17:31.143 Test: blockdev nvme admin passthru ...[2024-12-09 05:12:13.390628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:17:31.143 [2024-12-09 05:12:13.390663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:17:31.143 passed 01:17:31.143 Test: blockdev copy ...passed 01:17:31.143 Suite: bdevio tests on: Nvme2n1 01:17:31.143 Test: blockdev write read block ...passed 01:17:31.143 Test: blockdev write zeroes read block ...passed 01:17:31.143 Test: blockdev write zeroes read no split ...passed 01:17:31.143 Test: blockdev write zeroes read split ...passed 01:17:31.143 Test: blockdev write zeroes read split partial ...passed 01:17:31.143 Test: blockdev reset ...[2024-12-09 05:12:13.467520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:17:31.143 passed 01:17:31.144 Test: blockdev write read 8 blocks ...[2024-12-09 05:12:13.471716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
01:17:31.144 passed 01:17:31.144 Test: blockdev write read size > 128k ...passed 01:17:31.144 Test: blockdev write read invalid size ...passed 01:17:31.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:17:31.144 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:17:31.144 Test: blockdev write read max offset ...passed 01:17:31.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:17:31.144 Test: blockdev writev readv 8 blocks ...passed 01:17:31.144 Test: blockdev writev readv 30 x 1block ...passed 01:17:31.144 Test: blockdev writev readv block ...passed 01:17:31.144 Test: blockdev writev readv size > 128k ...passed 01:17:31.144 Test: blockdev writev readv size > 128k in two iovs ...passed 01:17:31.144 Test: blockdev comparev and writev ...[2024-12-09 05:12:13.481048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf038000 len:0x1000 01:17:31.144 [2024-12-09 05:12:13.481196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:17:31.144 passed 01:17:31.144 Test: blockdev nvme passthru rw ...passed 01:17:31.144 Test: blockdev nvme passthru vendor specific ...passed 01:17:31.144 Test: blockdev nvme admin passthru ...[2024-12-09 05:12:13.482253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:17:31.144 [2024-12-09 05:12:13.482290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:17:31.144 passed 01:17:31.144 Test: blockdev copy ...passed 01:17:31.144 Suite: bdevio tests on: Nvme1n1 01:17:31.144 Test: blockdev write read block ...passed 01:17:31.144 Test: blockdev write zeroes read block ...passed 01:17:31.144 Test: blockdev write zeroes read no split ...passed 01:17:31.144 Test: blockdev write zeroes read split ...passed 01:17:31.144 Test: blockdev write zeroes read split partial ...passed 01:17:31.144 Test: blockdev reset ...[2024-12-09 05:12:13.573962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 01:17:31.144 [2024-12-09 05:12:13.578119] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
01:17:31.144 passed 01:17:31.144 Test: blockdev write read 8 blocks ...passed 01:17:31.144 Test: blockdev write read size > 128k ...passed 01:17:31.144 Test: blockdev write read invalid size ...passed 01:17:31.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:17:31.144 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:17:31.144 Test: blockdev write read max offset ...passed 01:17:31.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:17:31.144 Test: blockdev writev readv 8 blocks ...passed 01:17:31.144 Test: blockdev writev readv 30 x 1block ...passed 01:17:31.144 Test: blockdev writev readv block ...passed 01:17:31.144 Test: blockdev writev readv size > 128k ...passed 01:17:31.144 Test: blockdev writev readv size > 128k in two iovs ...passed 01:17:31.144 Test: blockdev comparev and writev ...[2024-12-09 05:12:13.589205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf034000 len:0x1000 01:17:31.144 [2024-12-09 05:12:13.589376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:17:31.144 passed 01:17:31.144 Test: blockdev nvme passthru rw ...passed 01:17:31.144 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:12:13.590728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:17:31.144 [2024-12-09 05:12:13.590872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:17:31.144 passed 01:17:31.144 Test: blockdev nvme admin passthru ...passed 01:17:31.144 Test: blockdev copy ...passed 01:17:31.144 Suite: bdevio tests on: Nvme0n1 01:17:31.403 Test: blockdev write read block ...passed 01:17:31.403 Test: blockdev write zeroes read block ...passed 01:17:31.403 Test: blockdev write zeroes read no split ...passed 01:17:31.403 Test: blockdev write zeroes read split ...passed 01:17:31.403 Test: blockdev write zeroes read split partial ...passed 01:17:31.403 Test: blockdev reset ...[2024-12-09 05:12:13.671261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 01:17:31.403 passed 01:17:31.403 Test: blockdev write read 8 blocks ...[2024-12-09 05:12:13.675541] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
01:17:31.403 passed 01:17:31.403 Test: blockdev write read size > 128k ...passed 01:17:31.403 Test: blockdev write read invalid size ...passed 01:17:31.403 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:17:31.403 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:17:31.403 Test: blockdev write read max offset ...passed 01:17:31.403 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:17:31.403 Test: blockdev writev readv 8 blocks ...passed 01:17:31.403 Test: blockdev writev readv 30 x 1block ...passed 01:17:31.403 Test: blockdev writev readv block ...passed 01:17:31.403 Test: blockdev writev readv size > 128k ...passed 01:17:31.403 Test: blockdev writev readv size > 128k in two iovs ...passed 01:17:31.403 Test: blockdev comparev and writev ...passed 01:17:31.403 Test: blockdev nvme passthru rw ...[2024-12-09 05:12:13.683820] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 01:17:31.403 separate metadata which is not supported yet. 01:17:31.403 passed 01:17:31.403 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:12:13.684433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 01:17:31.403 [2024-12-09 05:12:13.684578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 01:17:31.403 passed 01:17:31.403 Test: blockdev nvme admin passthru ...passed 01:17:31.403 Test: blockdev copy ...passed 01:17:31.403 01:17:31.403 Run Summary: Type Total Ran Passed Failed Inactive 01:17:31.403 suites 6 6 n/a 0 0 01:17:31.403 tests 138 138 138 0 0 01:17:31.403 asserts 893 893 893 0 n/a 01:17:31.404 01:17:31.404 Elapsed time = 1.554 seconds 01:17:31.404 0 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61038 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61038 ']' 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61038 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61038 01:17:31.404 killing process with pid 61038 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61038' 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61038 01:17:31.404 05:12:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61038 01:17:32.783 05:12:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:17:32.783 01:17:32.783 real 0m3.175s 01:17:32.783 user 0m7.887s 01:17:32.783 sys 0m0.497s 01:17:32.783 ************************************ 01:17:32.783 END TEST bdev_bounds 01:17:32.783 ************************************ 01:17:32.783 05:12:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:32.783 05:12:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 01:17:32.783 05:12:15 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:17:32.783 05:12:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:17:32.783 05:12:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:32.783 05:12:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:32.783 ************************************ 01:17:32.783 START TEST bdev_nbd 01:17:32.783 ************************************ 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61109 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61109 /var/tmp/spdk-nbd.sock 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61109 ']' 01:17:32.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
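Each nbd_start_disk call in the trace that follows pairs one bdev with a kernel /dev/nbdX node over the dedicated /var/tmp/spdk-nbd.sock socket, and the harness then round-trips one block through the kernel device with dd. Condensed for a single bdev (paths mirror the trace; passing /dev/nbd0 explicitly is an assumption here, since the RPC can also pick a free device itself):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk Nvme0n1 /dev/nbd0
# O_DIRECT read of one 4 KiB block exercises the whole SPDK-to-kernel path,
# producing the '1+0 records in/out' lines seen below
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$rpc nbd_stop_disk /dev/nbd0

The waitfornbd helper in the trace is the glue in between: it simply polls grep -q -w nbd0 /proc/partitions until the kernel has registered the device.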
01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:32.783 05:12:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:17:32.783 [2024-12-09 05:12:15.168943] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:32.783 [2024-12-09 05:12:15.169232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:33.041 [2024-12-09 05:12:15.355515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:33.299 [2024-12-09 05:12:15.501401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:17:33.864 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 
)) 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:34.122 1+0 records in 01:17:34.122 1+0 records out 01:17:34.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707699 s, 5.8 MB/s 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:17:34.122 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:34.380 1+0 records in 01:17:34.380 1+0 records out 01:17:34.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635877 s, 6.4 MB/s 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 
']' 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:17:34.380 05:12:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:34.638 1+0 records in 01:17:34.638 1+0 records out 01:17:34.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655737 s, 6.2 MB/s 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:17:34.638 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@877 -- # break 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:34.896 1+0 records in 01:17:34.896 1+0 records out 01:17:34.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791227 s, 5.2 MB/s 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:17:34.896 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:35.154 1+0 records in 01:17:35.154 1+0 records out 01:17:35.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677236 s, 6.0 MB/s 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 01:17:35.154 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:35.413 1+0 records in 01:17:35.413 1+0 records out 01:17:35.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00167569 s, 2.4 MB/s 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:17:35.413 05:12:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:17:35.670 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd0", 01:17:35.670 "bdev_name": "Nvme0n1" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd1", 01:17:35.670 "bdev_name": "Nvme1n1" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd2", 01:17:35.670 "bdev_name": "Nvme2n1" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd3", 01:17:35.670 "bdev_name": "Nvme2n2" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd4", 01:17:35.670 "bdev_name": "Nvme2n3" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd5", 01:17:35.670 "bdev_name": "Nvme3n1" 01:17:35.670 } 01:17:35.670 ]' 01:17:35.670 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:17:35.670 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd0", 01:17:35.670 "bdev_name": "Nvme0n1" 01:17:35.670 }, 01:17:35.670 { 
01:17:35.670 "nbd_device": "/dev/nbd1", 01:17:35.670 "bdev_name": "Nvme1n1" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd2", 01:17:35.670 "bdev_name": "Nvme2n1" 01:17:35.670 }, 01:17:35.670 { 01:17:35.670 "nbd_device": "/dev/nbd3", 01:17:35.671 "bdev_name": "Nvme2n2" 01:17:35.671 }, 01:17:35.671 { 01:17:35.671 "nbd_device": "/dev/nbd4", 01:17:35.671 "bdev_name": "Nvme2n3" 01:17:35.671 }, 01:17:35.671 { 01:17:35.671 "nbd_device": "/dev/nbd5", 01:17:35.671 "bdev_name": "Nvme3n1" 01:17:35.671 } 01:17:35.671 ]' 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:35.671 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:35.928 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:36.187 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 01:17:36.444 
05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 01:17:36.444 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 01:17:36.444 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 01:17:36.444 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:36.444 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:36.444 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 01:17:36.444 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:36.445 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:36.445 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:36.445 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 01:17:36.708 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 01:17:36.708 05:12:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:36.708 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:36.967 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:17:37.226 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
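At this point every original mapping has been torn down (nbd_get_count reports 0), and nbd_rpc_data_verify remaps the same six bdevs onto /dev/nbd0, /dev/nbd1 and /dev/nbd10 through /dev/nbd13 for the dd-based data check that follows. Each attach repeats the pattern already traced above: nbd_start_disk maps the bdev, then a waitfornbd-style loop polls /proc/partitions (up to 20 tries) and finishes with one direct-I/O read to prove the node answers. A minimal sketch of that pattern, with illustrative paths rather than the exact autotest helpers:

  # Map bdev Nvme0n1 onto /dev/nbd0 through the spdk-nbd RPC socket
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # Wait for the kernel to publish the device (the retry delay is an assumption)
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  # One 4 KiB O_DIRECT read confirms the device actually serves I/O
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct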
01:17:37.485 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 01:17:37.485 /dev/nbd0 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:37.744 1+0 records in 01:17:37.744 1+0 records out 01:17:37.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882752 s, 4.6 MB/s 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:17:37.744 05:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 01:17:37.744 /dev/nbd1 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:38.004 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 01:17:38.005 1+0 records in 01:17:38.005 1+0 records out 01:17:38.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724047 s, 5.7 MB/s 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:17:38.005 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 01:17:38.005 /dev/nbd10 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:38.265 1+0 records in 01:17:38.265 1+0 records out 01:17:38.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753216 s, 5.4 MB/s 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:17:38.265 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 01:17:38.265 /dev/nbd11 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd11 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:38.538 1+0 records in 01:17:38.538 1+0 records out 01:17:38.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761464 s, 5.4 MB/s 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 01:17:38.538 /dev/nbd12 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:38.538 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:38.538 1+0 records in 01:17:38.538 1+0 records out 01:17:38.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109164 s, 3.8 MB/s 01:17:38.797 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.797 05:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:38.797 05:12:20 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 01:17:38.797 /dev/nbd13 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:17:38.797 1+0 records in 01:17:38.797 1+0 records out 01:17:38.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877551 s, 4.7 MB/s 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:17:38.797 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd0", 01:17:39.057 "bdev_name": "Nvme0n1" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd1", 01:17:39.057 "bdev_name": "Nvme1n1" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd10", 01:17:39.057 "bdev_name": "Nvme2n1" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd11", 01:17:39.057 
"bdev_name": "Nvme2n2" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd12", 01:17:39.057 "bdev_name": "Nvme2n3" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd13", 01:17:39.057 "bdev_name": "Nvme3n1" 01:17:39.057 } 01:17:39.057 ]' 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:17:39.057 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd0", 01:17:39.057 "bdev_name": "Nvme0n1" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd1", 01:17:39.057 "bdev_name": "Nvme1n1" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd10", 01:17:39.057 "bdev_name": "Nvme2n1" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd11", 01:17:39.057 "bdev_name": "Nvme2n2" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd12", 01:17:39.057 "bdev_name": "Nvme2n3" 01:17:39.057 }, 01:17:39.057 { 01:17:39.057 "nbd_device": "/dev/nbd13", 01:17:39.057 "bdev_name": "Nvme3n1" 01:17:39.057 } 01:17:39.057 ]' 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:17:39.316 /dev/nbd1 01:17:39.316 /dev/nbd10 01:17:39.316 /dev/nbd11 01:17:39.316 /dev/nbd12 01:17:39.316 /dev/nbd13' 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:17:39.316 /dev/nbd1 01:17:39.316 /dev/nbd10 01:17:39.316 /dev/nbd11 01:17:39.316 /dev/nbd12 01:17:39.316 /dev/nbd13' 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:17:39.316 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:17:39.316 256+0 records in 01:17:39.317 256+0 records out 01:17:39.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137848 s, 76.1 MB/s 01:17:39.317 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:17:39.317 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:17:39.317 256+0 records in 01:17:39.317 256+0 records out 01:17:39.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127428 s, 8.2 MB/s 01:17:39.317 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
01:17:39.317 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:17:39.576 256+0 records in 01:17:39.576 256+0 records out 01:17:39.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131978 s, 7.9 MB/s 01:17:39.576 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:17:39.576 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 01:17:39.576 256+0 records in 01:17:39.576 256+0 records out 01:17:39.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131483 s, 8.0 MB/s 01:17:39.576 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:17:39.576 05:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 01:17:39.836 256+0 records in 01:17:39.836 256+0 records out 01:17:39.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134629 s, 7.8 MB/s 01:17:39.836 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:17:39.836 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 01:17:39.836 256+0 records in 01:17:39.836 256+0 records out 01:17:39.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133642 s, 7.8 MB/s 01:17:39.836 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:17:39.836 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 01:17:40.096 256+0 records in 01:17:40.096 256+0 records out 01:17:40.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127486 s, 8.2 MB/s 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:40.096 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:40.355 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:17:40.614 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:17:40.614 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:17:40.614 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:17:40.614 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:40.614 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:40.615 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:17:40.615 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:40.615 05:12:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 01:17:40.615 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:40.615 05:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:40.873 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:41.132 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 01:17:41.391 05:12:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:41.391 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:17:41.650 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:17:41.650 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:17:41.650 05:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:17:41.650 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:17:41.907 malloc_lvol_verify 01:17:41.907 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:17:42.165 9f23c62b-c648-4c16-b9af-7078736d23f9 01:17:42.165 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:17:42.423 049eecb1-5b5e-4e9b-97fa-c25ac6635000 01:17:42.423 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:17:42.681 /dev/nbd0 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:17:42.681 05:12:24 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 01:17:42.681 mke2fs 1.47.0 (5-Feb-2023) 01:17:42.681 Discarding device blocks: 0/4096 done 01:17:42.681 Creating filesystem with 4096 1k blocks and 1024 inodes 01:17:42.681 01:17:42.681 Allocating group tables: 0/1 done 01:17:42.681 Writing inode tables: 0/1 done 01:17:42.681 Creating journal (1024 blocks): done 01:17:42.681 Writing superblocks and filesystem accounting information: 0/1 done 01:17:42.681 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:17:42.681 05:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61109 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61109 ']' 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61109 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61109 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:42.939 killing process with pid 61109 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61109' 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61109 01:17:42.939 05:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61109 01:17:44.316 05:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:17:44.316 01:17:44.316 real 0m11.487s 01:17:44.316 user 0m14.755s 01:17:44.316 sys 0m4.690s 01:17:44.316 05:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 
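The teardown just traced uses the harness's killprocess idiom: kill -0 first proves the PID still exists, ps then confirms the command name so a recycled PID is never signalled by mistake, and only after that is the target killed and reaped. A simplified sketch of the same idiom (61109 is the spdk-nbd target's PID reported above):

  pid=61109
  if kill -0 "$pid" 2>/dev/null; then
      name=$(ps --no-headers -o comm= "$pid")   # guard against PID reuse
      echo "killing process with pid $pid ($name)"
      kill "$pid" && wait "$pid" 2>/dev/null
  fi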
01:17:44.316 05:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:17:44.316 ************************************ 01:17:44.316 END TEST bdev_nbd 01:17:44.316 ************************************ 01:17:44.316 05:12:26 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:17:44.316 05:12:26 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 01:17:44.316 skipping fio tests on NVMe due to multi-ns failures. 01:17:44.316 05:12:26 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 01:17:44.316 05:12:26 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 01:17:44.316 05:12:26 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:17:44.316 05:12:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:17:44.316 05:12:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:44.316 05:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:44.316 ************************************ 01:17:44.316 START TEST bdev_verify 01:17:44.316 ************************************ 01:17:44.316 05:12:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:17:44.316 [2024-12-09 05:12:26.716260] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:44.316 [2024-12-09 05:12:26.716377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61500 ] 01:17:44.575 [2024-12-09 05:12:26.900826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:17:44.575 [2024-12-09 05:12:27.021584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:44.575 [2024-12-09 05:12:27.021630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:45.511 Running I/O for 5 seconds... 
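bdev_verify, started above, switches from nbd to the bdevperf example app. In the traced command, -q 128 is the per-job queue depth, -o 4096 the I/O size in bytes, -w verify a workload whose writes are read back and checked, -t 5 the run time in seconds, and -m 0x3 a two-core mask; the results below accordingly list each bdev twice, once per core. Stripped of the run_test wrapper, the invocation is just:

  # Standalone equivalent of the traced run; bdev.json supplies the bdevs,
  # and -C is carried over unchanged from the test script
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3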
01:17:47.828 19264.00 IOPS, 75.25 MiB/s
[2024-12-09T05:12:31.222Z] 20672.00 IOPS, 80.75 MiB/s
[2024-12-09T05:12:32.156Z] 21461.33 IOPS, 83.83 MiB/s
[2024-12-09T05:12:33.095Z] 21168.00 IOPS, 82.69 MiB/s
[2024-12-09T05:12:33.095Z] 21504.00 IOPS, 84.00 MiB/s
01:17:50.639 Latency(us)
01:17:50.639 [2024-12-09T05:12:33.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:17:50.639 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0x0 length 0xbd0bd
01:17:50.639 Nvme0n1 : 5.07 1728.30 6.75 0.00 0.00 73570.72 10580.51 81275.17
01:17:50.639 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0xbd0bd length 0xbd0bd
01:17:50.639 Nvme0n1 : 5.04 1803.60 7.05 0.00 0.00 70732.43 14633.74 76642.90
01:17:50.639 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0x0 length 0xa0000
01:17:50.639 Nvme1n1 : 5.09 1735.71 6.78 0.00 0.00 73355.60 10369.95 70326.18
01:17:50.639 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0xa0000 length 0xa0000
01:17:50.639 Nvme1n1 : 5.04 1803.13 7.04 0.00 0.00 70637.83 14844.30 69062.84
01:17:50.639 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0x0 length 0x80000
01:17:50.639 Nvme2n1 : 5.09 1735.32 6.78 0.00 0.00 73277.51 9633.00 71589.53
01:17:50.639 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0x80000 length 0x80000
01:17:50.639 Nvme2n1 : 5.09 1810.89 7.07 0.00 0.00 70224.34 14423.18 61061.65
01:17:50.639 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0x0 length 0x80000
01:17:50.639 Nvme2n2 : 5.09 1734.16 6.77 0.00 0.00 73173.31 12054.41 72852.87
01:17:50.639 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:17:50.639 Verification LBA range: start 0x80000 length 0x80000
01:17:50.639 Nvme2n2 : 5.09 1810.22 7.07 0.00 0.00 70110.39 15160.13 59377.20
01:17:50.639 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:17:50.640 Verification LBA range: start 0x0 length 0x80000
01:17:50.640 Nvme2n3 : 5.09 1733.53 6.77 0.00 0.00 73029.79 13054.56 74537.33
01:17:50.640 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:17:50.640 Verification LBA range: start 0x80000 length 0x80000
01:17:50.640 Nvme2n3 : 5.09 1809.84 7.07 0.00 0.00 70023.89 15897.09 61903.88
01:17:50.640 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:17:50.640 Verification LBA range: start 0x0 length 0x20000
01:17:50.640 Nvme3n1 : 5.10 1733.16 6.77 0.00 0.00 72919.43 13159.84 76221.79
01:17:50.640 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:17:50.640 Verification LBA range: start 0x20000 length 0x20000
01:17:50.640 Nvme3n1 : 5.09 1809.44 7.07 0.00 0.00 69917.09 15160.13 65272.80
01:17:50.640 [2024-12-09T05:12:33.096Z] ===================================================================================================================
01:17:50.640 [2024-12-09T05:12:33.096Z] Total : 21247.31 83.00 0.00 0.00 71717.69 9633.00 81275.17
01:17:52.057
01:17:52.057 real 0m7.704s user 0m14.122s sys 0m0.323s
01:17:52.057 05:12:34 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 01:17:52.057 05:12:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 01:17:52.057 ************************************ 01:17:52.057 END TEST bdev_verify 01:17:52.057 ************************************ 01:17:52.057 05:12:34 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:17:52.057 05:12:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:17:52.057 05:12:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:52.057 05:12:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:17:52.057 ************************************ 01:17:52.057 START TEST bdev_verify_big_io 01:17:52.057 ************************************ 01:17:52.057 05:12:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:17:52.057 [2024-12-09 05:12:34.508982] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:52.057 [2024-12-09 05:12:34.509112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61598 ] 01:17:52.316 [2024-12-09 05:12:34.700826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:17:52.575 [2024-12-09 05:12:34.814495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:52.575 [2024-12-09 05:12:34.814545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:53.509 Running I/O for 5 seconds... 
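Both verify runs take their targets from test/bdev/bdev.json, which the harness generates beforehand and removes during cleanup (the rm -f near the end of this section); its contents never appear in the log. For orientation only, a minimal config of the same shape, with a placeholder controller name and PCI address, would be:

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
          }
        ]
      }
    ]
  }
  EOF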
01:17:57.732 2774.00 IOPS, 173.38 MiB/s
[2024-12-09T05:12:41.566Z] 3303.00 IOPS, 206.44 MiB/s
[2024-12-09T05:12:41.566Z] 2915.00 IOPS, 182.19 MiB/s
[2024-12-09T05:12:41.566Z] 2766.75 IOPS, 172.92 MiB/s
01:17:59.110 Latency(us)
01:17:59.110 [2024-12-09T05:12:41.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:17:59.110 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:17:59.110 Verification LBA range: start 0x0 length 0xbd0b
01:17:59.110 Nvme0n1 : 5.56 182.95 11.43 0.00 0.00 686980.62 20845.19 710841.88
01:17:59.110 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:17:59.110 Verification LBA range: start 0xbd0b length 0xbd0b
01:17:59.110 Nvme0n1 : 5.56 160.18 10.01 0.00 0.00 780311.30 28004.14 845598.64
01:17:59.110 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:17:59.110 Verification LBA range: start 0x0 length 0xa000
01:17:59.111 Nvme1n1 : 5.56 180.74 11.30 0.00 0.00 677231.42 24319.38 626618.91
01:17:59.111 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0xa000 length 0xa000
01:17:59.111 Nvme1n1 : 5.57 160.95 10.06 0.00 0.00 757060.18 66536.15 707472.96
01:17:59.111 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x0 length 0x8000
01:17:59.111 Nvme2n1 : 5.56 180.12 11.26 0.00 0.00 664512.31 23582.43 636725.67
01:17:59.111 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x8000 length 0x8000
01:17:59.111 Nvme2n1 : 5.61 164.41 10.28 0.00 0.00 716083.23 44638.18 774851.34
01:17:59.111 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x0 length 0x8000
01:17:59.111 Nvme2n2 : 5.57 183.97 11.50 0.00 0.00 644775.58 46322.63 653570.26
01:17:59.111 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x8000 length 0x8000
01:17:59.111 Nvme2n2 : 5.64 167.46 10.47 0.00 0.00 685347.48 20845.19 1064578.36
01:17:59.111 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x0 length 0x8000
01:17:59.111 Nvme2n3 : 5.57 183.90 11.49 0.00 0.00 633514.46 47585.98 667045.94
01:17:59.111 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x8000 length 0x8000
01:17:59.111 Nvme2n3 : 5.71 184.21 11.51 0.00 0.00 611701.69 15581.25 1071316.20
01:17:59.111 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x0 length 0x2000
01:17:59.111 Nvme3n1 : 5.63 200.88 12.55 0.00 0.00 569145.97 1046.21 663677.02
01:17:59.111 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:17:59.111 Verification LBA range: start 0x2000 length 0x2000
01:17:59.111 Nvme3n1 : 5.72 196.54 12.28 0.00 0.00 562444.44 657.99 1482324.31
01:17:59.111 [2024-12-09T05:12:41.567Z] ===================================================================================================================
01:17:59.111 [2024-12-09T05:12:41.567Z] Total : 2146.31 134.14 0.00 0.00 661109.16 657.99 1482324.31
01:18:01.014
01:18:01.014 real 0m8.857s user 0m16.423s sys 0m0.322s
01:18:01.014 ************************************
01:18:01.014 END TEST bdev_verify_big_io
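The summary is internally consistent: at this run's 64 KiB I/O size, 2146.31 IOPS works out to 2146.31 × 65536 / 1048576 ≈ 134.14 MiB/s, exactly the Total row above (the earlier 4 KiB verify run checks out the same way: 21247.31 / 256 ≈ 83.00 MiB/s). As a one-liner:

  echo '2146.31 * 65536 / 1048576' | bc -l   # -> 134.14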
01:18:01.014 ************************************ 01:18:01.014 05:12:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:01.014 05:12:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 01:18:01.014 05:12:43 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:18:01.014 05:12:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:18:01.014 05:12:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:01.014 05:12:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:18:01.014 ************************************ 01:18:01.014 START TEST bdev_write_zeroes 01:18:01.014 ************************************ 01:18:01.014 05:12:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:18:01.014 [2024-12-09 05:12:43.432613] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:01.014 [2024-12-09 05:12:43.432739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 01:18:01.271 [2024-12-09 05:12:43.614968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:01.530 [2024-12-09 05:12:43.727642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:02.095 Running I/O for 1 seconds... 
01:18:03.026 68736.00 IOPS, 268.50 MiB/s
01:18:03.026 Latency(us)
01:18:03.026 [2024-12-09T05:12:45.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:18:03.026 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:03.026 Nvme0n1 : 1.02 11438.10 44.68 0.00 0.00 11169.66 9106.61 20318.79
01:18:03.026 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:03.026 Nvme1n1 : 1.02 11427.07 44.64 0.00 0.00 11167.47 9475.08 20318.79
01:18:03.026 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:03.026 Nvme2n1 : 1.02 11416.38 44.60 0.00 0.00 11121.21 8738.13 18002.66
01:18:03.026 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:03.026 Nvme2n2 : 1.02 11406.12 44.56 0.00 0.00 11112.56 8685.49 17476.27
01:18:03.026 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:03.026 Nvme2n3 : 1.02 11395.92 44.52 0.00 0.00 11096.85 7737.99 17897.38
01:18:03.026 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:03.026 Nvme3n1 : 1.02 11385.55 44.47 0.00 0.00 11087.71 7106.31 19160.73
01:18:03.026 [2024-12-09T05:12:45.482Z] ===================================================================================================================
01:18:03.026 [2024-12-09T05:12:45.482Z] Total : 68469.15 267.46 0.00 0.00 11125.91 7106.31 20318.79
01:18:04.402
01:18:04.402 real 0m3.332s user 0m2.930s sys 0m0.285s
01:18:04.402 05:12:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:04.402 05:12:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
01:18:04.402 ************************************
01:18:04.402 END TEST bdev_write_zeroes
01:18:04.402 ************************************
01:18:04.402 05:12:46 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:18:04.402 05:12:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
01:18:04.402 05:12:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:18:04.402 05:12:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
01:18:04.402 ************************************
01:18:04.402 START TEST bdev_json_nonenclosed
01:18:04.402 ************************************
01:18:04.402 05:12:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:18:04.402 [2024-12-09 05:12:46.836275] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
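bdev_json_nonenclosed, launched above, is a negative test: nonenclosed.json is deliberately malformed so bdevperf's JSON config loader rejects it, and the expected outcome is the "not enclosed in {}" error a few lines below. The fixture itself is not reproduced in the log; a hypothetical stand-in that would trip the same check is any file whose top level is not a JSON object:

  # Hypothetical stand-in: valid JSON, but the top level is an array, not {...}
  echo '[ { "subsystem": "bdev", "config": [] } ]' > nonenclosed.json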
01:18:04.402 [2024-12-09 05:12:46.836391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61771 ] 01:18:04.662 [2024-12-09 05:12:47.020457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:04.921 [2024-12-09 05:12:47.133798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:04.921 [2024-12-09 05:12:47.133893] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:18:04.921 [2024-12-09 05:12:47.133916] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:18:04.921 [2024-12-09 05:12:47.133928] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:18:05.181 01:18:05.181 real 0m0.725s 01:18:05.181 user 0m0.496s 01:18:05.181 sys 0m0.125s 01:18:05.181 05:12:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:05.181 ************************************ 01:18:05.181 END TEST bdev_json_nonenclosed 01:18:05.181 ************************************ 01:18:05.181 05:12:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:18:05.181 05:12:47 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:18:05.181 05:12:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:18:05.181 05:12:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:05.181 05:12:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:18:05.181 ************************************ 01:18:05.181 START TEST bdev_json_nonarray 01:18:05.181 ************************************ 01:18:05.181 05:12:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:18:05.181 [2024-12-09 05:12:47.626852] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:05.181 [2024-12-09 05:12:47.626969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 01:18:05.440 [2024-12-09 05:12:47.812033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:05.698 [2024-12-09 05:12:47.926274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:05.698 [2024-12-09 05:12:47.926378] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
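The nonenclosed and nonarray fixtures feed bdevperf deliberately malformed configs, which json_config rejects with the two errors traced above. For contrast, a minimal well-formed config is a single object whose "subsystems" member is an array; this sketch mirrors the Nvme0 attach parameters used elsewhere in this run, written to a scratch path:

# Minimal valid SPDK JSON config shape (sketch)
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF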
01:18:05.698 [2024-12-09 05:12:47.926401] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:18:05.698 [2024-12-09 05:12:47.926413] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:18:05.957 01:18:05.957 real 0m0.754s 01:18:05.957 user 0m0.501s 01:18:05.957 sys 0m0.147s 01:18:05.957 ************************************ 01:18:05.957 END TEST bdev_json_nonarray 01:18:05.957 ************************************ 01:18:05.957 05:12:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:05.957 05:12:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 01:18:05.957 05:12:48 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 01:18:05.957 ************************************ 01:18:05.957 END TEST blockdev_nvme 01:18:05.957 ************************************ 01:18:05.957 01:18:05.957 real 0m44.304s 01:18:05.957 user 1m4.282s 01:18:05.957 sys 0m8.102s 01:18:05.957 05:12:48 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:05.957 05:12:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:18:06.217 05:12:48 -- spdk/autotest.sh@209 -- # uname -s 01:18:06.217 05:12:48 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 01:18:06.217 05:12:48 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 01:18:06.217 05:12:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:06.217 05:12:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:06.217 05:12:48 -- common/autotest_common.sh@10 -- # set +x 01:18:06.217 ************************************ 01:18:06.217 START TEST blockdev_nvme_gpt 01:18:06.217 ************************************ 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 01:18:06.217 * Looking for test storage... 
01:18:06.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:06.217 05:12:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:06.217 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:06.217 --rc genhtml_branch_coverage=1 01:18:06.218 --rc genhtml_function_coverage=1 01:18:06.218 --rc genhtml_legend=1 01:18:06.218 --rc geninfo_all_blocks=1 01:18:06.218 --rc geninfo_unexecuted_blocks=1 01:18:06.218 01:18:06.218 ' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:06.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:06.218 --rc 
genhtml_branch_coverage=1 01:18:06.218 --rc genhtml_function_coverage=1 01:18:06.218 --rc genhtml_legend=1 01:18:06.218 --rc geninfo_all_blocks=1 01:18:06.218 --rc geninfo_unexecuted_blocks=1 01:18:06.218 01:18:06.218 ' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:06.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:06.218 --rc genhtml_branch_coverage=1 01:18:06.218 --rc genhtml_function_coverage=1 01:18:06.218 --rc genhtml_legend=1 01:18:06.218 --rc geninfo_all_blocks=1 01:18:06.218 --rc geninfo_unexecuted_blocks=1 01:18:06.218 01:18:06.218 ' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:06.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:06.218 --rc genhtml_branch_coverage=1 01:18:06.218 --rc genhtml_function_coverage=1 01:18:06.218 --rc genhtml_legend=1 01:18:06.218 --rc geninfo_all_blocks=1 01:18:06.218 --rc geninfo_unexecuted_blocks=1 01:18:06.218 01:18:06.218 ' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61886 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61886 
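The lcov probe traced above goes through the version comparison in scripts/common.sh. A condensed sketch of just the '<' case this run exercises (split on '.', '-' or ':' and compare numerically, missing components counting as 0; assumes numeric components):

lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1 # equal versions are not less-than
}

lt 1.15 2 && echo 'lcov older than 2.x: keep the 1.x branch/function coverage flags'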
01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61886 ']' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:06.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:06.218 05:12:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:06.477 [2024-12-09 05:12:48.751008] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:06.477 [2024-12-09 05:12:48.751128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61886 ] 01:18:06.736 [2024-12-09 05:12:48.935241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:06.736 [2024-12-09 05:12:49.084598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:08.115 05:12:50 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:08.115 05:12:50 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 01:18:08.115 05:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:18:08.115 05:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 01:18:08.115 05:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:18:08.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:08.661 Waiting for block devices as requested 01:18:08.661 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:18:08.661 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:18:08.943 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:18:08.943 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:18:14.227 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:18:14.227 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 01:18:14.227 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 01:18:14.228 05:12:56 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 01:18:14.228 BYT; 01:18:14.228 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 01:18:14.228 BYT; 01:18:14.228 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 01:18:14.228 05:12:56 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 01:18:14.228 05:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 01:18:15.163 The operation has completed successfully. 01:18:15.163 05:12:57 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 01:18:16.542 The operation has completed successfully. 01:18:16.542 05:12:58 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:18:16.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:17.438 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:18:17.439 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:18:17.439 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:18:17.699 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:18:17.699 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 01:18:17.699 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:17.699 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:17.699 [] 01:18:17.699 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:17.699 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 01:18:17.699 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 01:18:17.699 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 01:18:17.699 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:18:17.958 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 01:18:17.958 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:17.958 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:18:18.215 05:13:00 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.215 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:18.215 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 01:18:18.476 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.476 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:18:18.477 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4d7c60f6-4fdf-4d46-b184-ebcea63d6041"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d7c60f6-4fdf-4d46-b184-ebcea63d6041",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "837728f4-4398-4110-8e9d-020af8e6caf9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "837728f4-4398-4110-8e9d-020af8e6caf9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "02545887-6506-4836-a533-698c1a36403d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02545887-6506-4836-a533-698c1a36403d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "01cd2411-5490-49ae-a49e-ec751412eb94"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "01cd2411-5490-49ae-a49e-ec751412eb94",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c6e1cbda-12d9-46e4-9c7b-eea0a05f2c4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c6e1cbda-12d9-46e4-9c7b-eea0a05f2c4f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 01:18:18.477 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 01:18:18.477 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:18:18.477 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 01:18:18.477 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:18:18.477 05:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61886 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61886 ']' 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61886 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61886 01:18:18.477 killing process with pid 61886 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61886' 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61886 01:18:18.477 05:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61886 01:18:21.010 05:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:18:21.010 05:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:18:21.010 05:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:18:21.010 05:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:21.010 05:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:21.010 ************************************ 01:18:21.010 START TEST bdev_hello_world 01:18:21.010 ************************************ 01:18:21.010 05:13:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:18:21.010 [2024-12-09 
05:13:03.403153] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:21.010 [2024-12-09 05:13:03.403278] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62531 ] 01:18:21.267 [2024-12-09 05:13:03.576482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:21.524 [2024-12-09 05:13:03.752850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:22.089 [2024-12-09 05:13:04.420213] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:18:22.089 [2024-12-09 05:13:04.420267] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 01:18:22.089 [2024-12-09 05:13:04.420293] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:18:22.089 [2024-12-09 05:13:04.423232] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:18:22.089 [2024-12-09 05:13:04.423851] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:18:22.089 [2024-12-09 05:13:04.423887] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:18:22.089 [2024-12-09 05:13:04.424108] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 01:18:22.089 01:18:22.089 [2024-12-09 05:13:04.424144] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:18:23.460 01:18:23.460 real 0m2.340s 01:18:23.460 user 0m1.969s 01:18:23.460 sys 0m0.260s 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:23.460 ************************************ 01:18:23.460 END TEST bdev_hello_world 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:18:23.460 ************************************ 01:18:23.460 05:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:18:23.460 05:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:23.460 05:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:23.460 05:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:23.460 ************************************ 01:18:23.460 START TEST bdev_bounds 01:18:23.460 ************************************ 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62583 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:18:23.460 Process bdevio pid: 62583 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62583' 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62583 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62583 ']' 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@840 -- # local max_retries=100 01:18:23.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:23.460 05:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:18:23.460 [2024-12-09 05:13:05.829620] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:23.460 [2024-12-09 05:13:05.829831] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62583 ] 01:18:23.718 [2024-12-09 05:13:06.021404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:18:23.718 [2024-12-09 05:13:06.140193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:23.718 [2024-12-09 05:13:06.140341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:23.718 [2024-12-09 05:13:06.140373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:18:24.669 05:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:24.669 05:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:18:24.669 05:13:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:18:24.669 I/O targets: 01:18:24.669 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 01:18:24.669 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 01:18:24.669 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 01:18:24.669 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 01:18:24.669 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 01:18:24.669 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 01:18:24.669 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 01:18:24.669 01:18:24.669 01:18:24.669 CUnit - A unit testing framework for C - Version 2.1-3 01:18:24.669 http://cunit.sourceforge.net/ 01:18:24.669 01:18:24.669 01:18:24.669 Suite: bdevio tests on: Nvme3n1 01:18:24.669 Test: blockdev write read block ...passed 01:18:24.669 Test: blockdev write zeroes read block ...passed 01:18:24.669 Test: blockdev write zeroes read no split ...passed 01:18:24.669 Test: blockdev write zeroes read split ...passed 01:18:24.669 Test: blockdev write zeroes read split partial ...passed 01:18:24.669 Test: blockdev reset ...[2024-12-09 05:13:07.010240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 01:18:24.669 [2024-12-09 05:13:07.013862] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
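The bdev_bounds run above pairs two processes, started exactly as the traced commands show: the bdevio app and an RPC driver. A standalone sketch of that pair (paths from this workspace; per this run's usage, -w holds bdevio idle until tests are requested over its RPC socket, and -s 0 matches the PRE_RESERVED_MEM=0 set earlier in this log):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
bdevio_pid=$!
# drive every suite listed under "I/O targets" over RPC, then stop the app
"$SPDK/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"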
01:18:24.669 passed 01:18:24.669 Test: blockdev write read 8 blocks ...passed 01:18:24.669 Test: blockdev write read size > 128k ...passed 01:18:24.669 Test: blockdev write read invalid size ...passed 01:18:24.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:24.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:24.669 Test: blockdev write read max offset ...passed 01:18:24.669 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:24.669 Test: blockdev writev readv 8 blocks ...passed 01:18:24.669 Test: blockdev writev readv 30 x 1block ...passed 01:18:24.669 Test: blockdev writev readv block ...passed 01:18:24.669 Test: blockdev writev readv size > 128k ...passed 01:18:24.669 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:24.669 Test: blockdev comparev and writev ...[2024-12-09 05:13:07.021018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc804000 len:0x1000 01:18:24.669 [2024-12-09 05:13:07.021068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:18:24.669 passed 01:18:24.669 Test: blockdev nvme passthru rw ...passed 01:18:24.669 Test: blockdev nvme passthru vendor specific ...passed 01:18:24.669 Test: blockdev nvme admin passthru ...[2024-12-09 05:13:07.021709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:18:24.669 [2024-12-09 05:13:07.021746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:18:24.669 passed 01:18:24.669 Test: blockdev copy ...passed 01:18:24.669 Suite: bdevio tests on: Nvme2n3 01:18:24.669 Test: blockdev write read block ...passed 01:18:24.669 Test: blockdev write zeroes read block ...passed 01:18:24.669 Test: blockdev write zeroes read no split ...passed 01:18:24.669 Test: blockdev write zeroes read split ...passed 01:18:24.669 Test: blockdev write zeroes read split partial ...passed 01:18:24.669 Test: blockdev reset ...[2024-12-09 05:13:07.098342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:18:24.669 [2024-12-09 05:13:07.102137] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:18:24.669 passed 01:18:24.669 Test: blockdev write read 8 blocks ...passed 01:18:24.669 Test: blockdev write read size > 128k ...passed 01:18:24.669 Test: blockdev write read invalid size ...passed 01:18:24.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:24.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:24.669 Test: blockdev write read max offset ...passed 01:18:24.669 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:24.669 Test: blockdev writev readv 8 blocks ...passed 01:18:24.669 Test: blockdev writev readv 30 x 1block ...passed 01:18:24.669 Test: blockdev writev readv block ...passed 01:18:24.669 Test: blockdev writev readv size > 128k ...passed 01:18:24.669 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:24.669 Test: blockdev comparev and writev ...[2024-12-09 05:13:07.109820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc802000 len:0x1000 01:18:24.669 [2024-12-09 05:13:07.109873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:18:24.669 passed 01:18:24.669 Test: blockdev nvme passthru rw ...passed 01:18:24.669 Test: blockdev nvme passthru vendor specific ...passed 01:18:24.669 Test: blockdev nvme admin passthru ...[2024-12-09 05:13:07.110442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:18:24.669 [2024-12-09 05:13:07.110486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:18:24.669 passed 01:18:24.669 Test: blockdev copy ...passed 01:18:24.669 Suite: bdevio tests on: Nvme2n2 01:18:24.669 Test: blockdev write read block ...passed 01:18:24.669 Test: blockdev write zeroes read block ...passed 01:18:24.928 Test: blockdev write zeroes read no split ...passed 01:18:24.928 Test: blockdev write zeroes read split ...passed 01:18:24.928 Test: blockdev write zeroes read split partial ...passed 01:18:24.928 Test: blockdev reset ...[2024-12-09 05:13:07.193734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:18:24.928 [2024-12-09 05:13:07.198009] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:18:24.928 passed 01:18:24.928 Test: blockdev write read 8 blocks ...passed 01:18:24.928 Test: blockdev write read size > 128k ...passed 01:18:24.928 Test: blockdev write read invalid size ...passed 01:18:24.928 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:24.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:24.928 Test: blockdev write read max offset ...passed 01:18:24.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:24.928 Test: blockdev writev readv 8 blocks ...passed 01:18:24.928 Test: blockdev writev readv 30 x 1block ...passed 01:18:24.928 Test: blockdev writev readv block ...passed 01:18:24.928 Test: blockdev writev readv size > 128k ...passed 01:18:24.928 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:24.929 Test: blockdev comparev and writev ...[2024-12-09 05:13:07.206485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0e38000 len:0x1000 01:18:24.929 [2024-12-09 05:13:07.206533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:18:24.929 passed 01:18:24.929 Test: blockdev nvme passthru rw ...passed 01:18:24.929 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:13:07.207480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:18:24.929 passed 01:18:24.929 Test: blockdev nvme admin passthru ...[2024-12-09 05:13:07.207516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:18:24.929 passed 01:18:24.929 Test: blockdev copy ...passed 01:18:24.929 Suite: bdevio tests on: Nvme2n1 01:18:24.929 Test: blockdev write read block ...passed 01:18:24.929 Test: blockdev write zeroes read block ...passed 01:18:24.929 Test: blockdev write zeroes read no split ...passed 01:18:24.929 Test: blockdev write zeroes read split ...passed 01:18:24.929 Test: blockdev write zeroes read split partial ...passed 01:18:24.929 Test: blockdev reset ...[2024-12-09 05:13:07.285727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:18:24.929 [2024-12-09 05:13:07.289953] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:18:24.929 passed 01:18:24.929 Test: blockdev write read 8 blocks ...passed 01:18:24.929 Test: blockdev write read size > 128k ...passed 01:18:24.929 Test: blockdev write read invalid size ...passed 01:18:24.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:24.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:24.929 Test: blockdev write read max offset ...passed 01:18:24.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:24.929 Test: blockdev writev readv 8 blocks ...passed 01:18:24.929 Test: blockdev writev readv 30 x 1block ...passed 01:18:24.929 Test: blockdev writev readv block ...passed 01:18:24.929 Test: blockdev writev readv size > 128k ...passed 01:18:24.929 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:24.929 Test: blockdev comparev and writev ...[2024-12-09 05:13:07.298206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0e34000 len:0x1000 01:18:24.929 [2024-12-09 05:13:07.298258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:18:24.929 passed 01:18:24.929 Test: blockdev nvme passthru rw ...passed 01:18:24.929 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:13:07.299067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:18:24.929 [2024-12-09 05:13:07.299102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:18:24.929 passed 01:18:24.929 Test: blockdev nvme admin passthru ...passed 01:18:24.929 Test: blockdev copy ...passed 01:18:24.929 Suite: bdevio tests on: Nvme1n1p2 01:18:24.929 Test: blockdev write read block ...passed 01:18:24.929 Test: blockdev write zeroes read block ...passed 01:18:24.929 Test: blockdev write zeroes read no split ...passed 01:18:24.929 Test: blockdev write zeroes read split ...passed 01:18:24.929 Test: blockdev write zeroes read split partial ...passed 01:18:24.929 Test: blockdev reset ...[2024-12-09 05:13:07.378286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 01:18:25.188 [2024-12-09 05:13:07.382191] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
01:18:25.188 passed 01:18:25.188 Test: blockdev write read 8 blocks ...passed 01:18:25.188 Test: blockdev write read size > 128k ...passed 01:18:25.188 Test: blockdev write read invalid size ...passed 01:18:25.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:25.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:25.188 Test: blockdev write read max offset ...passed 01:18:25.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:25.188 Test: blockdev writev readv 8 blocks ...passed 01:18:25.188 Test: blockdev writev readv 30 x 1block ...passed 01:18:25.188 Test: blockdev writev readv block ...passed 01:18:25.188 Test: blockdev writev readv size > 128k ...passed 01:18:25.188 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:25.188 Test: blockdev comparev and writev ...[2024-12-09 05:13:07.389654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d0e30000 len:0x1000 01:18:25.188 [2024-12-09 05:13:07.389700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:18:25.188 passed 01:18:25.188 Test: blockdev nvme passthru rw ...passed 01:18:25.188 Test: blockdev nvme passthru vendor specific ...passed 01:18:25.188 Test: blockdev nvme admin passthru ...passed 01:18:25.188 Test: blockdev copy ...passed 01:18:25.188 Suite: bdevio tests on: Nvme1n1p1 01:18:25.188 Test: blockdev write read block ...passed 01:18:25.188 Test: blockdev write zeroes read block ...passed 01:18:25.188 Test: blockdev write zeroes read no split ...passed 01:18:25.188 Test: blockdev write zeroes read split ...passed 01:18:25.188 Test: blockdev write zeroes read split partial ...passed 01:18:25.188 Test: blockdev reset ...[2024-12-09 05:13:07.454247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 01:18:25.188 [2024-12-09 05:13:07.457927] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
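Note the lba values in the COMPARE notices: whole-namespace bdevs (the NvmeXnY suites) issue the test I/O at lba:0, while the GPT part bdevs remap partition-relative offset 0 to the partition's start on the parent namespace — hence lba:655360 for Nvme1n1p2 above and lba:256 for Nvme1n1p1 below. A hedged way to confirm a partition's placement on a live target (the exact field names under driver_specific vary by SPDK version, so the sketch just dumps that object):

"$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1p2 \
  | jq '.[0].driver_specific'   # the partition's start offset lives in here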
01:18:25.188 passed 01:18:25.188 Test: blockdev write read 8 blocks ...passed 01:18:25.188 Test: blockdev write read size > 128k ...passed 01:18:25.188 Test: blockdev write read invalid size ...passed 01:18:25.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:25.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:25.188 Test: blockdev write read max offset ...passed 01:18:25.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:25.188 Test: blockdev writev readv 8 blocks ...passed 01:18:25.188 Test: blockdev writev readv 30 x 1block ...passed 01:18:25.188 Test: blockdev writev readv block ...passed 01:18:25.188 Test: blockdev writev readv size > 128k ...passed 01:18:25.188 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:25.188 Test: blockdev comparev and writev ...[2024-12-09 05:13:07.465713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bd20e000 len:0x1000 01:18:25.188 [2024-12-09 05:13:07.465759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:18:25.188 passed 01:18:25.188 Test: blockdev nvme passthru rw ...passed 01:18:25.188 Test: blockdev nvme passthru vendor specific ...passed 01:18:25.188 Test: blockdev nvme admin passthru ...passed 01:18:25.188 Test: blockdev copy ...passed 01:18:25.188 Suite: bdevio tests on: Nvme0n1 01:18:25.188 Test: blockdev write read block ...passed 01:18:25.188 Test: blockdev write zeroes read block ...passed 01:18:25.188 Test: blockdev write zeroes read no split ...passed 01:18:25.188 Test: blockdev write zeroes read split ...passed 01:18:25.188 Test: blockdev write zeroes read split partial ...passed 01:18:25.188 Test: blockdev reset ...[2024-12-09 05:13:07.531166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 01:18:25.188 passed 01:18:25.188 Test: blockdev write read 8 blocks ...[2024-12-09 05:13:07.534961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 01:18:25.188 passed 01:18:25.188 Test: blockdev write read size > 128k ...passed 01:18:25.188 Test: blockdev write read invalid size ...passed 01:18:25.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:18:25.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:18:25.188 Test: blockdev write read max offset ...passed 01:18:25.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:18:25.188 Test: blockdev writev readv 8 blocks ...passed 01:18:25.188 Test: blockdev writev readv 30 x 1block ...passed 01:18:25.188 Test: blockdev writev readv block ...passed 01:18:25.188 Test: blockdev writev readv size > 128k ...passed 01:18:25.188 Test: blockdev writev readv size > 128k in two iovs ...passed 01:18:25.188 Test: blockdev comparev and writev ...passed 01:18:25.188 Test: blockdev nvme passthru rw ...[2024-12-09 05:13:07.541435] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 01:18:25.188 separate metadata which is not supported yet. 
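The *ERROR* line above is an intentional skip, not a failure: bdevio declines to run comparev_and_writev against bdevs formatted with separate (non-interleaved) metadata, which is how this Nvme0n1 namespace is set up. A hedged check for whether a bdev carries metadata, using the standard bdev_get_bdevs RPC (the metadata-size field name in the JSON may differ by SPDK version, so the sketch dumps the whole record rather than guessing it):

"$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 | jq '.[0]'
# A nonzero metadata size in the dumped record is what triggers the skip.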
01:18:25.188 passed 01:18:25.188 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:13:07.541969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 01:18:25.188 [2024-12-09 05:13:07.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 01:18:25.188 passed 01:18:25.189 Test: blockdev nvme admin passthru ...passed 01:18:25.189 Test: blockdev copy ...passed 01:18:25.189 01:18:25.189 Run Summary: Type Total Ran Passed Failed Inactive 01:18:25.189 suites 7 7 n/a 0 0 01:18:25.189 tests 161 161 161 0 0 01:18:25.189 asserts 1025 1025 1025 0 n/a 01:18:25.189 01:18:25.189 Elapsed time = 1.663 seconds 01:18:25.189 0 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62583 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62583 ']' 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62583 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62583 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:25.189 killing process with pid 62583 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62583' 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62583 01:18:25.189 05:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62583 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:18:26.569 01:18:26.569 real 0m3.047s 01:18:26.569 user 0m7.612s 01:18:26.569 sys 0m0.440s 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:18:26.569 ************************************ 01:18:26.569 END TEST bdev_bounds 01:18:26.569 ************************************ 01:18:26.569 05:13:08 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:18:26.569 05:13:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:18:26.569 05:13:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:26.569 05:13:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:26.569 ************************************ 01:18:26.569 START TEST bdev_nbd 01:18:26.569 ************************************ 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62644 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62644 /var/tmp/spdk-nbd.sock 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62644 ']' 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:26.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:26.569 05:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:18:26.569 [2024-12-09 05:13:08.934002] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
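The nbd test stands up a dedicated SPDK app (bdev_svc) on its own RPC socket and blocks until the socket answers, which is what the "Waiting for process to start up and listen..." message above reflects. A minimal sketch of that launch-and-wait pattern, assuming an SPDK build under $SPDK and the JSON bdev config the trace shows (the rpc_get_methods polling loop is a simplification of the harness's waitforlisten helper):

SPDK=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-nbd.sock
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
  --json "$SPDK"/test/bdev/bdev.json &
svc_pid=$!
# Poll the RPC socket until the app answers a trivial RPC.
for _ in $(seq 1 100); do
  "$SPDK"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done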
01:18:26.569 [2024-12-09 05:13:08.934123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:26.829 [2024-12-09 05:13:09.120385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:26.829 [2024-12-09 05:13:09.227028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:27.764 05:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:27.764 1+0 records in 01:18:27.764 1+0 records out 01:18:27.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627174 s, 6.5 MB/s 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:27.764 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:28.022 1+0 records in 01:18:28.022 1+0 records out 01:18:28.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612095 s, 6.7 MB/s 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:28.022 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:28.280 1+0 records in 01:18:28.280 1+0 records out 01:18:28.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072398 s, 5.7 MB/s 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:28.280 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:28.538 1+0 records in 01:18:28.538 1+0 records out 01:18:28.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627203 s, 6.5 MB/s 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:28.538 05:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:28.796 1+0 records in 01:18:28.796 1+0 records out 01:18:28.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000878215 s, 4.7 MB/s 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:28.796 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:29.056 1+0 records in 01:18:29.056 1+0 records out 01:18:29.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000915845 s, 4.5 MB/s 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:29.056 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:29.315 1+0 records in 01:18:29.315 1+0 records out 01:18:29.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918247 s, 4.5 MB/s 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:18:29.315 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd0", 01:18:29.574 "bdev_name": "Nvme0n1" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd1", 01:18:29.574 "bdev_name": "Nvme1n1p1" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd2", 01:18:29.574 "bdev_name": "Nvme1n1p2" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd3", 01:18:29.574 "bdev_name": "Nvme2n1" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd4", 01:18:29.574 "bdev_name": "Nvme2n2" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd5", 01:18:29.574 "bdev_name": "Nvme2n3" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd6", 01:18:29.574 "bdev_name": "Nvme3n1" 01:18:29.574 } 01:18:29.574 ]' 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd0", 01:18:29.574 "bdev_name": "Nvme0n1" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd1", 01:18:29.574 "bdev_name": "Nvme1n1p1" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd2", 01:18:29.574 "bdev_name": "Nvme1n1p2" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd3", 01:18:29.574 "bdev_name": "Nvme2n1" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd4", 01:18:29.574 "bdev_name": "Nvme2n2" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd5", 01:18:29.574 "bdev_name": "Nvme2n3" 01:18:29.574 }, 01:18:29.574 { 01:18:29.574 "nbd_device": "/dev/nbd6", 01:18:29.574 "bdev_name": "Nvme3n1" 01:18:29.574 } 01:18:29.574 ]' 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:29.574 05:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:29.833 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:30.092 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:30.350 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:30.350 05:13:12 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:30.609 05:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 01:18:30.609 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:30.866 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:31.125 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:18:31.407 
05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:31.407 05:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 01:18:31.703 /dev/nbd0 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:31.703 1+0 records in 01:18:31.703 1+0 records out 01:18:31.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521842 s, 7.8 MB/s 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:31.703 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 01:18:31.962 /dev/nbd1 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:31.962 05:13:14 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:31.962 1+0 records in 01:18:31.962 1+0 records out 01:18:31.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642314 s, 6.4 MB/s 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:31.962 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 01:18:32.219 /dev/nbd10 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:32.219 1+0 records in 01:18:32.219 1+0 records out 01:18:32.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553776 s, 7.4 MB/s 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:32.219 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:32.220 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:32.220 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:32.220 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 01:18:32.478 /dev/nbd11 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:32.478 1+0 records in 01:18:32.478 1+0 records out 01:18:32.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633405 s, 6.5 MB/s 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:32.478 05:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 01:18:32.737 /dev/nbd12 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
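The heavily repeated trace above is the harness's waitfornbd helper, run once per NBD device as it is attached. Reconstructed from the trace (the 20-probe bound, the /proc/partitions check, the single 4 KiB direct-I/O read, and the nonzero-size test are all verbatim; the retry delay between probes and the /tmp scratch path are assumptions, since the trace only shows the successful probe and the repo-local nbdtest file):

waitfornbd() {
  local nbd_name=$1 i size
  # Wait up to 20 probes for the kernel to publish the device node.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # assumed delay; not visible in the trace
  done
  # Prove the device is readable: one 4 KiB direct-I/O read.
  dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]   # matches the traced "'[' 4096 '!=' 0 ']'" check
}

Usage mirrors the trace, e.g. waitfornbd nbd12 after nbd_start_disk maps a bdev onto /dev/nbd12.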
01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:32.737 1+0 records in 01:18:32.737 1+0 records out 01:18:32.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751748 s, 5.4 MB/s 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:32.737 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 01:18:32.996 /dev/nbd13 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:32.996 1+0 records in 01:18:32.996 1+0 records out 01:18:32.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875798 s, 4.7 MB/s 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:32.996 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 01:18:33.255 /dev/nbd14 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:18:33.255 1+0 records in 01:18:33.255 1+0 records out 01:18:33.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00195949 s, 2.1 MB/s 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:33.255 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd0", 01:18:33.514 "bdev_name": "Nvme0n1" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd1", 01:18:33.514 "bdev_name": "Nvme1n1p1" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd10", 01:18:33.514 "bdev_name": "Nvme1n1p2" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd11", 01:18:33.514 "bdev_name": "Nvme2n1" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd12", 01:18:33.514 "bdev_name": "Nvme2n2" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd13", 01:18:33.514 "bdev_name": "Nvme2n3" 
01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd14", 01:18:33.514 "bdev_name": "Nvme3n1" 01:18:33.514 } 01:18:33.514 ]' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd0", 01:18:33.514 "bdev_name": "Nvme0n1" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd1", 01:18:33.514 "bdev_name": "Nvme1n1p1" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd10", 01:18:33.514 "bdev_name": "Nvme1n1p2" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd11", 01:18:33.514 "bdev_name": "Nvme2n1" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd12", 01:18:33.514 "bdev_name": "Nvme2n2" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd13", 01:18:33.514 "bdev_name": "Nvme2n3" 01:18:33.514 }, 01:18:33.514 { 01:18:33.514 "nbd_device": "/dev/nbd14", 01:18:33.514 "bdev_name": "Nvme3n1" 01:18:33.514 } 01:18:33.514 ]' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:18:33.514 /dev/nbd1 01:18:33.514 /dev/nbd10 01:18:33.514 /dev/nbd11 01:18:33.514 /dev/nbd12 01:18:33.514 /dev/nbd13 01:18:33.514 /dev/nbd14' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:18:33.514 /dev/nbd1 01:18:33.514 /dev/nbd10 01:18:33.514 /dev/nbd11 01:18:33.514 /dev/nbd12 01:18:33.514 /dev/nbd13 01:18:33.514 /dev/nbd14' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:18:33.514 256+0 records in 01:18:33.514 256+0 records out 01:18:33.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123384 s, 85.0 MB/s 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:33.514 05:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:18:33.773 256+0 records in 01:18:33.773 256+0 records out 01:18:33.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.140896 s, 7.4 MB/s 01:18:33.774 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:33.774 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:18:33.774 256+0 records in 01:18:33.774 256+0 records out 01:18:33.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148303 s, 7.1 MB/s 01:18:33.774 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:33.774 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 01:18:34.032 256+0 records in 01:18:34.032 256+0 records out 01:18:34.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144282 s, 7.3 MB/s 01:18:34.032 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:34.032 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 01:18:34.291 256+0 records in 01:18:34.291 256+0 records out 01:18:34.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140747 s, 7.5 MB/s 01:18:34.291 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:34.291 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 01:18:34.291 256+0 records in 01:18:34.291 256+0 records out 01:18:34.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140551 s, 7.5 MB/s 01:18:34.291 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:34.291 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 01:18:34.550 256+0 records in 01:18:34.550 256+0 records out 01:18:34.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140384 s, 7.5 MB/s 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 01:18:34.550 256+0 records in 01:18:34.550 256+0 records out 01:18:34.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140674 s, 7.5 MB/s 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:18:34.550 05:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 01:18:34.809 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:34.810 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:35.069 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:35.328 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 01:18:35.587 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 01:18:35.587 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 01:18:35.587 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:35.588 05:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:35.847 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:36.106 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:36.107 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:18:36.366 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:18:36.366 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:18:36.366 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:18:36.624 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 01:18:36.624 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:18:36.624 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:18:36.624 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:18:36.624 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:18:36.625 05:13:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:18:36.625 malloc_lvol_verify 01:18:36.625 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:18:36.884 0aa5773c-d77c-472b-bbf5-266ea013d073 01:18:36.884 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:18:37.143 323a9f22-b75e-4c30-a18e-694776bc5f92 01:18:37.143 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:18:37.402 /dev/nbd0 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 01:18:37.402 mke2fs 1.47.0 (5-Feb-2023) 01:18:37.402 Discarding device blocks: 0/4096 done 01:18:37.402 Creating filesystem with 4096 1k blocks and 1024 inodes 01:18:37.402 01:18:37.402 Allocating group tables: 0/1 done 01:18:37.402 Writing inode tables: 0/1 done 01:18:37.402 Creating journal (1024 blocks): done 01:18:37.402 Writing superblocks and filesystem accounting information: 0/1 done 01:18:37.402 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 01:18:37.402 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62644 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62644 ']' 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62644 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62644 01:18:37.662 killing process with pid 62644 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62644' 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62644 01:18:37.662 05:13:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62644 01:18:39.042 05:13:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:18:39.042 01:18:39.042 real 0m12.470s 01:18:39.042 user 0m15.995s 01:18:39.042 sys 0m5.284s 01:18:39.042 05:13:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:39.042 05:13:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:18:39.042 ************************************ 01:18:39.042 END TEST bdev_nbd 01:18:39.042 ************************************ 01:18:39.042 05:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:18:39.042 05:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 01:18:39.042 05:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 01:18:39.042 skipping fio tests on NVMe due to multi-ns failures. 01:18:39.042 05:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
01:18:39.042 05:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 01:18:39.042 05:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:18:39.042 05:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:18:39.042 05:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:39.042 05:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:18:39.042 ************************************ 01:18:39.042 START TEST bdev_verify 01:18:39.042 ************************************ 01:18:39.042 05:13:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:18:39.042 [2024-12-09 05:13:21.463135] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:39.042 [2024-12-09 05:13:21.463271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63072 ] 01:18:39.302 [2024-12-09 05:13:21.646356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:18:39.563 [2024-12-09 05:13:21.764369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:39.563 [2024-12-09 05:13:21.764420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:40.129 Running I/O for 5 seconds... 
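bdev_verify, launched just above, points the bdevperf example app at the same bdev.json: -q 128 keeps 128 I/Os outstanding, -o 4096 issues 4-KiB I/Os, -w verify reads back and checks everything it writes, -t 5 runs for five seconds, and -m 0x3 pins reactors to cores 0 and 1 (hence the two "Reactor started" notices). The progress readings and per-job latency table that follow are bdevperf's standard output. To repeat the run by hand (flags copied from the traced command; a sketch, not part of the harness):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # Identical to the run_test invocation traced above.
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3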
01:18:42.441 21632.00 IOPS, 84.50 MiB/s
[2024-12-09T05:13:25.867Z] 21088.00 IOPS, 82.38 MiB/s
[2024-12-09T05:13:26.801Z] 22144.00 IOPS, 86.50 MiB/s
[2024-12-09T05:13:27.738Z] 22221.50 IOPS, 86.80 MiB/s
[2024-12-09T05:13:27.738Z] 22423.60 IOPS, 87.59 MiB/s
01:18:45.282 Latency(us)
01:18:45.282 [2024-12-09T05:13:27.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:18:45.282 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0xbd0bd
01:18:45.282 Nvme0n1 : 5.08 1586.84 6.20 0.00 0.00 80498.15 17370.99 82117.40
01:18:45.282 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0xbd0bd length 0xbd0bd
01:18:45.282 Nvme0n1 : 5.09 1583.99 6.19 0.00 0.00 79902.61 21371.58 68641.72
01:18:45.282 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0x4ff80
01:18:45.282 Nvme1n1p1 : 5.08 1585.86 6.19 0.00 0.00 80439.43 17370.99 75800.67
01:18:45.282 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x4ff80 length 0x4ff80
01:18:45.282 Nvme1n1p1 : 5.09 1583.41 6.19 0.00 0.00 79809.22 18529.05 69483.95
01:18:45.282 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0x4ff7f
01:18:45.282 Nvme1n1p2 : 5.09 1584.96 6.19 0.00 0.00 80283.58 17476.27 68641.72
01:18:45.282 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x4ff7f length 0x4ff7f
01:18:45.282 Nvme1n1p2 : 5.10 1591.53 6.22 0.00 0.00 79354.80 2052.93 74116.22
01:18:45.282 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0x80000
01:18:45.282 Nvme2n1 : 5.09 1583.86 6.19 0.00 0.00 80155.75 20318.79 65272.80
01:18:45.282 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x80000 length 0x80000
01:18:45.282 Nvme2n1 : 5.08 1586.84 6.20 0.00 0.00 80486.99 21687.42 80854.05
01:18:45.282 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0x80000
01:18:45.282 Nvme2n2 : 5.09 1583.31 6.18 0.00 0.00 80032.56 20318.79 63588.34
01:18:45.282 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x80000 length 0x80000
01:18:45.282 Nvme2n2 : 5.08 1585.88 6.19 0.00 0.00 80242.53 23056.04 64851.69
01:18:45.282 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0x80000
01:18:45.282 Nvme2n3 : 5.10 1582.67 6.18 0.00 0.00 79866.62 14423.18 68220.61
01:18:45.282 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x80000 length 0x80000
01:18:45.282 Nvme2n3 : 5.09 1585.03 6.19 0.00 0.00 80146.22 24424.66 66536.15
01:18:45.282 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x0 length 0x20000
01:18:45.282 Nvme3n1 : 5.10 1582.28 6.18 0.00 0.00 79807.55 17581.55 70747.30
01:18:45.282 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:18:45.282 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.09 1584.34 6.19 0.00 0.00 80030.49 25372.17 68641.72
01:18:45.282 [2024-12-09T05:13:27.738Z] ===================================================================================================================
01:18:45.282 [2024-12-09T05:13:27.738Z] Total : 22190.81 86.68 0.00 0.00 80075.12 2052.93 82117.40
01:18:47.194
01:18:47.194 real 0m7.828s
01:18:47.194 user 0m14.382s
01:18:47.194 sys 0m0.318s
01:18:47.194 05:13:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:47.194 05:13:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
01:18:47.194 ************************************
01:18:47.194 END TEST bdev_verify
01:18:47.194 ************************************
01:18:47.194 05:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
01:18:47.194 05:13:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
01:18:47.194 05:13:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
01:18:47.194 05:13:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
01:18:47.194 ************************************
01:18:47.194 START TEST bdev_verify_big_io
01:18:47.194 ************************************
01:18:47.194 05:13:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
01:18:47.194 [2024-12-09 05:13:29.353955] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:18:47.194 [2024-12-09 05:13:29.354076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63170 ]
01:18:47.194 [2024-12-09 05:13:29.541652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
01:18:47.453 [2024-12-09 05:13:29.661359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:18:47.453 [2024-12-09 05:13:29.661409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:18:48.390 Running I/O for 5 seconds...
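Both the 4-KiB verify table above and the 64-KiB big-I/O table below share a fixed layout: per job, a "Job:" line, a "Verification LBA range" line, and a data row of runtime, IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency in microseconds. Each bdev appears twice (once per core mask), so a quick offline summary can fold the pairs together. A small awk sketch, assuming raw bdevperf output was saved to bdevperf.log without the CI timestamp prefixes:

    # Mean of the two per-core "Average" latency cells for each bdev.
    # Data rows look like:
    #   Nvme0n1 : 5.08 1586.84 6.20 0.00 0.00 80498.15 17370.99 82117.40
    # so $1 is the bdev name and $8 the average latency (us).
    awk '$2 == ":" && $1 ~ /^Nvme/ {
             sum[$1] += $8; n[$1]++
         }
         END {
             for (d in sum) printf "%-10s %10.2f us\n", d, sum[d] / n[d]
         }' bdevperf.log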
01:18:51.488 1584.00 IOPS, 99.00 MiB/s
[2024-12-09T05:13:35.873Z] 2063.50 IOPS, 128.97 MiB/s
[2024-12-09T05:13:36.441Z] 2028.00 IOPS, 126.75 MiB/s
[2024-12-09T05:13:36.441Z] 2818.25 IOPS, 176.14 MiB/s
01:18:53.985 Latency(us)
01:18:53.985 [2024-12-09T05:13:36.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:18:53.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.985 Verification LBA range: start 0x0 length 0xbd0b
01:18:53.985 Nvme0n1 : 5.57 144.04 9.00 0.00 0.00 856660.52 22634.92 943297.29
01:18:53.985 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.985 Verification LBA range: start 0xbd0b length 0xbd0b
01:18:53.985 Nvme0n1 : 5.84 121.37 7.59 0.00 0.00 980510.42 104436.49 1037627.01
01:18:53.986 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x0 length 0x4ff8
01:18:53.986 Nvme1n1p1 : 5.66 154.32 9.64 0.00 0.00 782541.39 71168.41 805171.61
01:18:53.986 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x4ff8 length 0x4ff8
01:18:53.986 Nvme1n1p1 : 5.86 125.70 7.86 0.00 0.00 923971.71 69905.07 1003937.82
01:18:53.986 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x0 length 0x4ff7
01:18:53.986 Nvme1n1p2 : 5.66 154.63 9.66 0.00 0.00 762247.01 72010.64 811909.45
01:18:53.986 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x4ff7 length 0x4ff7
01:18:53.986 Nvme1n1p2 : 5.87 131.15 8.20 0.00 0.00 872592.37 17581.55 1017413.50
01:18:53.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x0 length 0x8000
01:18:53.986 Nvme2n1 : 5.66 158.25 9.89 0.00 0.00 732826.33 82538.51 822016.21
01:18:53.986 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x8000 length 0x8000
01:18:53.986 Nvme2n1 : 5.87 131.30 8.21 0.00 0.00 850294.81 19266.00 1024151.34
01:18:53.986 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x0 length 0x8000
01:18:53.986 Nvme2n2 : 5.76 166.55 10.41 0.00 0.00 681298.81 50112.67 822016.21
01:18:53.986 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x8000 length 0x8000
01:18:53.986 Nvme2n2 : 5.88 141.49 8.84 0.00 0.00 772739.55 5184.98 1030889.18
01:18:53.986 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x0 length 0x8000
01:18:53.986 Nvme2n3 : 5.84 166.48 10.40 0.00 0.00 666900.48 29267.48 1670983.76
01:18:53.986 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x8000 length 0x8000
01:18:53.986 Nvme2n3 : 5.86 126.97 7.94 0.00 0.00 984417.96 32425.84 950035.12
01:18:53.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x0 length 0x2000
01:18:53.986 Nvme3n1 : 5.86 177.79 11.11 0.00 0.00 610830.91 3026.76 1697935.11
01:18:53.986 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:18:53.986 Verification LBA range: start 0x2000 length 0x2000
01:18:53.986 Nvme3n1 : 5.86 127.28 7.95 0.00 0.00 960711.17 72431.76 916345.93
01:18:53.986 [2024-12-09T05:13:36.442Z] ===================================================================================================================
01:18:53.986 [2024-12-09T05:13:36.442Z] Total : 2027.32 126.71 0.00 0.00 803834.78 3026.76 1697935.11
01:18:56.521
01:18:56.521 real 0m9.253s
01:18:56.521 user 0m17.189s
01:18:56.521 sys 0m0.327s
01:18:56.521 05:13:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:56.521 ************************************
01:18:56.521 END TEST bdev_verify_big_io
01:18:56.521 ************************************
01:18:56.521 05:13:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
01:18:56.521 05:13:38 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:18:56.521 05:13:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
01:18:56.521 05:13:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
01:18:56.521 05:13:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
01:18:56.521 ************************************
01:18:56.521 START TEST bdev_write_zeroes
01:18:56.521 ************************************
01:18:56.521 05:13:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:18:56.521 [2024-12-09 05:13:38.683386] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:18:56.521 [2024-12-09 05:13:38.683516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63290 ]
01:18:56.521 [2024-12-09 05:13:38.868366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:18:56.780 [2024-12-09 05:13:39.006452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:18:57.348 Running I/O for 1 seconds...
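The START/END banners and the real/user/sys triplet around every test in this log come from the run_test helper, which brackets a command with banners and times it; the one-second write_zeroes run launched just above goes through the same wrapper, only swapping the workload (-w write_zeroes -t 1), which is why no "Verification LBA range" rows appear in the table that follows. A minimal sketch of the pattern (illustrative; the real helper in test/common/autotest_common.sh also manages xtrace and exit-code bookkeeping):

    run_test() {
        # Banner, time the command, banner again - as seen in the log.
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"
        local rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return "$rc"
    }

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    run_test bdev_write_zeroes "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1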
01:18:58.726 64960.00 IOPS, 253.75 MiB/s
01:18:58.726 Latency(us)
01:18:58.726 [2024-12-09T05:13:41.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:18:58.726 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme0n1 : 1.03 9238.72 36.09 0.00 0.00 13817.57 7106.31 37900.34
01:18:58.726 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme1n1p1 : 1.03 9228.56 36.05 0.00 0.00 13810.78 11896.49 37268.67
01:18:58.726 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme1n1p2 : 1.03 9219.05 36.01 0.00 0.00 13773.20 11580.66 35163.09
01:18:58.726 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme2n1 : 1.03 9211.22 35.98 0.00 0.00 13692.33 11791.22 33268.07
01:18:58.726 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme2n2 : 1.03 9202.43 35.95 0.00 0.00 13652.91 11738.58 28635.81
01:18:58.726 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme2n3 : 1.03 9250.34 36.13 0.00 0.00 13574.34 6895.76 25161.61
01:18:58.726 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:18:58.726 Nvme3n1 : 1.03 9242.22 36.10 0.00 0.00 13534.58 6948.40 23687.71
01:18:58.726 [2024-12-09T05:13:41.182Z] ===================================================================================================================
01:18:58.726 [2024-12-09T05:13:41.182Z] Total : 64592.54 252.31 0.00 0.00 13693.40 6895.76 37900.34
01:19:00.099
01:19:00.099 real 0m3.592s
01:19:00.099 user 0m3.120s
01:19:00.099 sys 0m0.352s
01:19:00.099 05:13:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:00.099 05:13:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
01:19:00.099 ************************************
01:19:00.099 END TEST bdev_write_zeroes
01:19:00.099 ************************************
01:19:00.099 05:13:42 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:19:00.099 05:13:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
01:19:00.099 05:13:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:00.099 05:13:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
01:19:00.099 ************************************
01:19:00.099 START TEST bdev_json_nonenclosed
01:19:00.099 ************************************
01:19:00.099 05:13:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:19:00.099 [2024-12-09 05:13:42.359624] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
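bdev_json_nonenclosed, whose startup begins just above, and bdev_json_nonarray right after it are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if the app prints the corresponding json_config error ("not enclosed in {}." and "'subsystems' should be an array.", both visible below) and stops instead of running I/O. Plausible shapes for the two inputs (illustrative guesses; the actual nonenclosed.json and nonarray.json under test/bdev/ may differ):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # Valid JSON fragments, but not wrapped in a top-level {}:
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json

    # Enclosed, but "subsystems" is an object rather than an array:
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > /tmp/nonarray.json

    # Both configs must be rejected before any I/O starts.
    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        if "$bdevperf" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1; then
            echo "ERROR: $cfg was accepted" >&2
            exit 1
        fi
    done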
01:19:00.099 [2024-12-09 05:13:42.360261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63349 ] 01:19:00.099 [2024-12-09 05:13:42.549762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:00.357 [2024-12-09 05:13:42.693944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:00.357 [2024-12-09 05:13:42.694068] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:19:00.357 [2024-12-09 05:13:42.694095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:19:00.357 [2024-12-09 05:13:42.694109] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:19:00.615 01:19:00.615 real 0m0.809s 01:19:00.616 user 0m0.523s 01:19:00.616 sys 0m0.179s 01:19:00.616 05:13:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:00.616 05:13:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:19:00.616 ************************************ 01:19:00.616 END TEST bdev_json_nonenclosed 01:19:00.616 ************************************ 01:19:00.874 05:13:43 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:19:00.874 05:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:19:00.874 05:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:00.874 05:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:19:00.874 ************************************ 01:19:00.874 START TEST bdev_json_nonarray 01:19:00.874 ************************************ 01:19:00.874 05:13:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:19:00.874 [2024-12-09 05:13:43.244767] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:19:00.874 [2024-12-09 05:13:43.244920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63380 ] 01:19:01.132 [2024-12-09 05:13:43.437773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:01.132 [2024-12-09 05:13:43.584905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:01.132 [2024-12-09 05:13:43.585066] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
01:19:01.132 [2024-12-09 05:13:43.585094] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:19:01.132 [2024-12-09 05:13:43.585108] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:19:01.700 01:19:01.700 real 0m0.814s 01:19:01.700 user 0m0.530s 01:19:01.700 sys 0m0.178s 01:19:01.700 05:13:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:01.700 05:13:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:19:01.700 ************************************ 01:19:01.700 END TEST bdev_json_nonarray 01:19:01.700 ************************************ 01:19:01.700 05:13:44 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 01:19:01.700 05:13:44 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 01:19:01.700 05:13:44 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 01:19:01.700 05:13:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:01.700 05:13:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:01.700 05:13:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:19:01.700 ************************************ 01:19:01.700 START TEST bdev_gpt_uuid 01:19:01.700 ************************************ 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63411 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63411 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63411 ']' 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:01.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:01.700 05:13:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:19:01.959 [2024-12-09 05:13:44.154094] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
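bdev_gpt_uuid, whose spdk_tgt startup begins here, checks that the two GPT partitions on Nvme1n1 surface as bdevs addressable by their partition GUIDs: it loads bdev.json, waits for bdev examine to finish, then asserts that bdev_get_bdevs -b <guid> returns a bdev whose alias and unique_partition_guid both echo the GUID back (the jq probes are visible in the trace below). A condensed sketch of that check, with the GUIDs and RPC names taken from the trace (paths as on this CI VM):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    part1=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first
    part2=abf1734f-66e5-4c0f-aa29-4021d4d307df   # SPDK_TEST_second

    "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc" bdev_wait_for_examine

    for uuid in "$part1" "$part2"; do
        bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
        # Alias and unique partition GUID must both match the lookup key.
        [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
        [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]
    done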
01:19:01.959 [2024-12-09 05:13:44.154263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63411 ] 01:19:01.959 [2024-12-09 05:13:44.345569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:02.218 [2024-12-09 05:13:44.493029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:03.159 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:03.159 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 01:19:03.159 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:19:03.159 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:03.159 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:19:03.746 Some configs were skipped because the RPC state that can call them passed over. 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 01:19:03.746 { 01:19:03.746 "name": "Nvme1n1p1", 01:19:03.746 "aliases": [ 01:19:03.746 "6f89f330-603b-4116-ac73-2ca8eae53030" 01:19:03.746 ], 01:19:03.746 "product_name": "GPT Disk", 01:19:03.746 "block_size": 4096, 01:19:03.746 "num_blocks": 655104, 01:19:03.746 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 01:19:03.746 "assigned_rate_limits": { 01:19:03.746 "rw_ios_per_sec": 0, 01:19:03.746 "rw_mbytes_per_sec": 0, 01:19:03.746 "r_mbytes_per_sec": 0, 01:19:03.746 "w_mbytes_per_sec": 0 01:19:03.746 }, 01:19:03.746 "claimed": false, 01:19:03.746 "zoned": false, 01:19:03.746 "supported_io_types": { 01:19:03.746 "read": true, 01:19:03.746 "write": true, 01:19:03.746 "unmap": true, 01:19:03.746 "flush": true, 01:19:03.746 "reset": true, 01:19:03.746 "nvme_admin": false, 01:19:03.746 "nvme_io": false, 01:19:03.746 "nvme_io_md": false, 01:19:03.746 "write_zeroes": true, 01:19:03.746 "zcopy": false, 01:19:03.746 "get_zone_info": false, 01:19:03.746 "zone_management": false, 01:19:03.746 "zone_append": false, 01:19:03.746 "compare": true, 01:19:03.746 "compare_and_write": false, 01:19:03.746 "abort": true, 01:19:03.746 "seek_hole": false, 01:19:03.746 "seek_data": false, 01:19:03.746 "copy": true, 01:19:03.746 "nvme_iov_md": false 01:19:03.746 }, 01:19:03.746 "driver_specific": { 
01:19:03.746 "gpt": { 01:19:03.746 "base_bdev": "Nvme1n1", 01:19:03.746 "offset_blocks": 256, 01:19:03.746 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 01:19:03.746 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 01:19:03.746 "partition_name": "SPDK_TEST_first" 01:19:03.746 } 01:19:03.746 } 01:19:03.746 } 01:19:03.746 ]' 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 01:19:03.746 05:13:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 01:19:03.746 { 01:19:03.746 "name": "Nvme1n1p2", 01:19:03.746 "aliases": [ 01:19:03.746 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 01:19:03.746 ], 01:19:03.746 "product_name": "GPT Disk", 01:19:03.746 "block_size": 4096, 01:19:03.746 "num_blocks": 655103, 01:19:03.746 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 01:19:03.746 "assigned_rate_limits": { 01:19:03.746 "rw_ios_per_sec": 0, 01:19:03.746 "rw_mbytes_per_sec": 0, 01:19:03.746 "r_mbytes_per_sec": 0, 01:19:03.746 "w_mbytes_per_sec": 0 01:19:03.746 }, 01:19:03.746 "claimed": false, 01:19:03.746 "zoned": false, 01:19:03.746 "supported_io_types": { 01:19:03.746 "read": true, 01:19:03.746 "write": true, 01:19:03.746 "unmap": true, 01:19:03.746 "flush": true, 01:19:03.746 "reset": true, 01:19:03.746 "nvme_admin": false, 01:19:03.746 "nvme_io": false, 01:19:03.746 "nvme_io_md": false, 01:19:03.746 "write_zeroes": true, 01:19:03.746 "zcopy": false, 01:19:03.746 "get_zone_info": false, 01:19:03.746 "zone_management": false, 01:19:03.746 "zone_append": false, 01:19:03.746 "compare": true, 01:19:03.746 "compare_and_write": false, 01:19:03.746 "abort": true, 01:19:03.746 "seek_hole": false, 01:19:03.746 "seek_data": false, 01:19:03.746 "copy": true, 01:19:03.746 "nvme_iov_md": false 01:19:03.746 }, 01:19:03.746 "driver_specific": { 01:19:03.746 "gpt": { 01:19:03.746 "base_bdev": "Nvme1n1", 01:19:03.746 "offset_blocks": 655360, 01:19:03.746 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 01:19:03.746 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 01:19:03.746 "partition_name": "SPDK_TEST_second" 01:19:03.746 } 01:19:03.746 } 01:19:03.746 } 01:19:03.746 ]' 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 01:19:03.746 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 01:19:03.747 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63411 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63411 ']' 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63411 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63411 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:04.014 killing process with pid 63411 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63411' 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63411 01:19:04.014 05:13:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63411 01:19:07.304 01:19:07.304 real 0m5.051s 01:19:07.304 user 0m5.016s 01:19:07.304 sys 0m0.770s 01:19:07.304 05:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:07.304 ************************************ 01:19:07.304 END TEST bdev_gpt_uuid 01:19:07.304 ************************************ 01:19:07.304 05:13:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 01:19:07.304 05:13:49 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:19:07.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:19:07.562 Waiting for block devices as requested 01:19:07.821 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:19:07.821 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 01:19:07.821 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:19:08.080 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:19:13.347 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:19:13.347 05:13:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 01:19:13.347 05:13:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 01:19:13.347 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 01:19:13.347 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 01:19:13.347 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 01:19:13.347 /dev/nvme0n1: calling ioctl to re-read partition table: Success 01:19:13.347 05:13:55 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 01:19:13.347 ************************************ 01:19:13.347 END TEST blockdev_nvme_gpt 01:19:13.347 ************************************ 01:19:13.347 01:19:13.347 real 1m7.295s 01:19:13.347 user 1m22.450s 01:19:13.347 sys 0m12.575s 01:19:13.347 05:13:55 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:13.347 05:13:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:19:13.347 05:13:55 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 01:19:13.347 05:13:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:13.347 05:13:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:13.347 05:13:55 -- common/autotest_common.sh@10 -- # set +x 01:19:13.607 ************************************ 01:19:13.607 START TEST nvme 01:19:13.607 ************************************ 01:19:13.607 05:13:55 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 01:19:13.607 * Looking for test storage... 01:19:13.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:19:13.607 05:13:55 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:13.607 05:13:55 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:13.607 05:13:55 nvme -- common/autotest_common.sh@1693 -- # lcov --version 01:19:13.607 05:13:56 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:13.607 05:13:56 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:13.607 05:13:56 nvme -- scripts/common.sh@336 -- # IFS=.-: 01:19:13.607 05:13:56 nvme -- scripts/common.sh@336 -- # read -ra ver1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@337 -- # IFS=.-: 01:19:13.607 05:13:56 nvme -- scripts/common.sh@337 -- # read -ra ver2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@338 -- # local 'op=<' 01:19:13.607 05:13:56 nvme -- scripts/common.sh@340 -- # ver1_l=2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@341 -- # ver2_l=1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:13.607 05:13:56 nvme -- scripts/common.sh@344 -- # case "$op" in 01:19:13.607 05:13:56 nvme -- scripts/common.sh@345 -- # : 1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:13.607 05:13:56 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:13.607 05:13:56 nvme -- scripts/common.sh@365 -- # decimal 1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@353 -- # local d=1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:13.607 05:13:56 nvme -- scripts/common.sh@355 -- # echo 1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@365 -- # ver1[v]=1 01:19:13.607 05:13:56 nvme -- scripts/common.sh@366 -- # decimal 2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@353 -- # local d=2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:13.607 05:13:56 nvme -- scripts/common.sh@355 -- # echo 2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@366 -- # ver2[v]=2 01:19:13.607 05:13:56 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:13.607 05:13:56 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:13.607 05:13:56 nvme -- scripts/common.sh@368 -- # return 0 01:19:13.607 05:13:56 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:13.607 05:13:56 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.607 --rc genhtml_branch_coverage=1 01:19:13.607 --rc genhtml_function_coverage=1 01:19:13.607 --rc genhtml_legend=1 01:19:13.607 --rc geninfo_all_blocks=1 01:19:13.607 --rc geninfo_unexecuted_blocks=1 01:19:13.607 01:19:13.607 ' 01:19:13.607 05:13:56 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.607 --rc genhtml_branch_coverage=1 01:19:13.607 --rc genhtml_function_coverage=1 01:19:13.607 --rc genhtml_legend=1 01:19:13.607 --rc geninfo_all_blocks=1 01:19:13.607 --rc geninfo_unexecuted_blocks=1 01:19:13.607 01:19:13.607 ' 01:19:13.607 05:13:56 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.607 --rc genhtml_branch_coverage=1 01:19:13.607 --rc genhtml_function_coverage=1 01:19:13.607 --rc genhtml_legend=1 01:19:13.607 --rc geninfo_all_blocks=1 01:19:13.607 --rc geninfo_unexecuted_blocks=1 01:19:13.607 01:19:13.607 ' 01:19:13.607 05:13:56 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.607 --rc genhtml_branch_coverage=1 01:19:13.607 --rc genhtml_function_coverage=1 01:19:13.607 --rc genhtml_legend=1 01:19:13.607 --rc geninfo_all_blocks=1 01:19:13.607 --rc geninfo_unexecuted_blocks=1 01:19:13.607 01:19:13.607 ' 01:19:13.607 05:13:56 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:19:14.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:19:15.110 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:19:15.110 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:19:15.110 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:19:15.368 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:19:15.368 05:13:57 nvme -- nvme/nvme.sh@79 -- # uname 01:19:15.368 05:13:57 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 01:19:15.368 05:13:57 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 01:19:15.368 05:13:57 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 01:19:15.368 05:13:57 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1073 -- # echo 0 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1075 -- # stubpid=64076 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 01:19:15.368 Waiting for stub to ready for secondary processes... 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64076 ]] 01:19:15.368 05:13:57 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 01:19:15.368 [2024-12-09 05:13:57.730260] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:19:15.368 [2024-12-09 05:13:57.730413] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 01:19:16.309 05:13:58 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 01:19:16.309 05:13:58 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64076 ]] 01:19:16.309 05:13:58 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 01:19:17.243 [2024-12-09 05:13:59.406148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:19:17.243 [2024-12-09 05:13:59.513667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:17.243 [2024-12-09 05:13:59.513831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:17.243 [2024-12-09 05:13:59.513865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:17.243 [2024-12-09 05:13:59.531830] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 01:19:17.243 [2024-12-09 05:13:59.531877] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:19:17.243 [2024-12-09 05:13:59.548839] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 01:19:17.243 [2024-12-09 05:13:59.548975] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 01:19:17.243 [2024-12-09 05:13:59.553173] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:19:17.243 [2024-12-09 05:13:59.553674] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 01:19:17.243 [2024-12-09 05:13:59.553774] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 01:19:17.243 [2024-12-09 05:13:59.557843] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:19:17.243 [2024-12-09 05:13:59.558110] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 01:19:17.243 [2024-12-09 05:13:59.558203] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 01:19:17.243 [2024-12-09 05:13:59.561775] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:19:17.243 [2024-12-09 05:13:59.562032] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 01:19:17.243 [2024-12-09 05:13:59.562116] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 01:19:17.243 [2024-12-09 05:13:59.562175] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 01:19:17.243 [2024-12-09 05:13:59.562231] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 01:19:17.243 05:13:59 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 01:19:17.243 done. 01:19:17.243 05:13:59 nvme -- common/autotest_common.sh@1082 -- # echo done. 01:19:17.243 05:13:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 01:19:17.243 05:13:59 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 01:19:17.243 05:13:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:17.243 05:13:59 nvme -- common/autotest_common.sh@10 -- # set +x 01:19:17.500 ************************************ 01:19:17.500 START TEST nvme_reset 01:19:17.500 ************************************ 01:19:17.500 05:13:59 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 01:19:17.757 Initializing NVMe Controllers 01:19:17.757 Skipping QEMU NVMe SSD at 0000:00:13.0 01:19:17.757 Skipping QEMU NVMe SSD at 0000:00:10.0 01:19:17.757 Skipping QEMU NVMe SSD at 0000:00:11.0 01:19:17.757 Skipping QEMU NVMe SSD at 0000:00:12.0 01:19:17.757 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 01:19:17.757 01:19:17.757 real 0m0.314s 01:19:17.757 user 0m0.114s 01:19:17.757 sys 0m0.156s 01:19:17.757 05:14:00 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:17.757 05:14:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 01:19:17.757 ************************************ 01:19:17.757 END TEST nvme_reset 01:19:17.757 ************************************ 01:19:17.757 05:14:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 01:19:17.757 05:14:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:17.757 05:14:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:17.757 05:14:00 nvme -- common/autotest_common.sh@10 -- # set +x 01:19:17.757 ************************************ 01:19:17.757 START TEST nvme_identify 01:19:17.757 ************************************ 01:19:17.757 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 01:19:17.757 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 01:19:17.757 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 01:19:17.757 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 01:19:17.757 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 01:19:17.757 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 01:19:17.758 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 01:19:17.758 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:19:17.758 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:19:17.758 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:19:17.758 05:14:00 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:19:17.758 05:14:00 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:19:17.758 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 01:19:18.018 [2024-12-09 05:14:00.446417] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64109 terminated unexpected 01:19:18.018 ===================================================== 01:19:18.018 NVMe Controller at 0000:00:13.0 [1b36:0010] 01:19:18.018 ===================================================== 01:19:18.018 Controller Capabilities/Features 01:19:18.018 ================================ 01:19:18.018 Vendor ID: 1b36 01:19:18.018 Subsystem Vendor ID: 1af4 01:19:18.018 Serial Number: 12343 01:19:18.018 Model Number: QEMU NVMe Ctrl 01:19:18.018 Firmware Version: 8.0.0 01:19:18.018 Recommended Arb Burst: 6 01:19:18.018 IEEE OUI Identifier: 00 54 52 01:19:18.018 Multi-path I/O 01:19:18.018 May have multiple subsystem ports: No 01:19:18.018 May have multiple controllers: Yes 01:19:18.018 Associated with SR-IOV VF: No 01:19:18.018 Max Data Transfer Size: 524288 01:19:18.018 Max Number of Namespaces: 256 01:19:18.018 Max Number of I/O Queues: 64 01:19:18.018 NVMe Specification Version (VS): 1.4 01:19:18.018 NVMe Specification Version (Identify): 1.4 01:19:18.018 Maximum Queue Entries: 2048 01:19:18.018 Contiguous Queues Required: Yes 01:19:18.018 Arbitration Mechanisms Supported 01:19:18.018 Weighted Round Robin: Not Supported 01:19:18.018 Vendor Specific: Not Supported 01:19:18.018 Reset Timeout: 7500 ms 01:19:18.018 Doorbell Stride: 4 bytes 01:19:18.018 NVM Subsystem Reset: Not Supported 01:19:18.018 Command Sets Supported 01:19:18.018 NVM Command Set: Supported 01:19:18.018 Boot Partition: Not Supported 01:19:18.018 Memory Page Size Minimum: 4096 bytes 01:19:18.018 Memory Page Size Maximum: 65536 bytes 01:19:18.018 Persistent Memory Region: Not Supported 01:19:18.018 Optional Asynchronous Events Supported 01:19:18.018 Namespace Attribute Notices: Supported 01:19:18.018 Firmware Activation Notices: Not Supported 01:19:18.018 ANA Change Notices: Not Supported 01:19:18.018 PLE Aggregate Log Change Notices: Not Supported 01:19:18.018 LBA Status Info Alert Notices: Not Supported 01:19:18.018 EGE Aggregate Log Change Notices: Not Supported 01:19:18.018 Normal NVM Subsystem Shutdown event: Not Supported 01:19:18.018 Zone Descriptor Change Notices: Not Supported 01:19:18.018 Discovery Log Change Notices: Not Supported 01:19:18.018 Controller Attributes 01:19:18.018 128-bit Host Identifier: Not Supported 01:19:18.018 Non-Operational Permissive Mode: Not Supported 01:19:18.018 NVM Sets: Not Supported 01:19:18.018 Read Recovery Levels: Not Supported 01:19:18.018 Endurance Groups: Supported 01:19:18.018 Predictable Latency Mode: Not Supported 01:19:18.018 Traffic Based Keep ALive: Not Supported 01:19:18.018 Namespace Granularity: Not Supported 01:19:18.018 SQ Associations: Not Supported 01:19:18.018 UUID List: Not Supported 01:19:18.018 Multi-Domain Subsystem: Not Supported 01:19:18.018 Fixed Capacity Management: Not Supported 01:19:18.018 Variable Capacity Management: Not Supported 01:19:18.018 Delete Endurance Group: Not Supported 01:19:18.018 Delete NVM Set: Not Supported 01:19:18.018 Extended LBA Formats Supported: Supported 01:19:18.018 Flexible Data Placement Supported: Supported 01:19:18.018 01:19:18.018 Controller Memory Buffer Support 01:19:18.018 ================================ 01:19:18.018 Supported: No 01:19:18.018 
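The get_nvme_bdfs helper traced just above is what feeds every per-controller step in this test: gen_nvme.sh renders the locally detected NVMe controllers as a JSON config, and jq extracts each controller's PCI address from .config[].params.traddr. A minimal standalone sketch of the same pattern, assuming an SPDK checkout at $SPDK_DIR (a placeholder for this sketch, not a variable from the run) and jq on PATH:

    # List NVMe controller PCI addresses (BDFs) the way get_nvme_bdfs does:
    # gen_nvme.sh emits a JSON config whose .config[].params.traddr fields
    # carry the transport addresses of the detected controllers.
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1  # an empty list is fatal, cf. the (( 4 == 0 )) guard above
        printf '%s\n' "${bdfs[@]}"
    }

On this VM the helper found the four QEMU controllers, 0000:00:10.0 through 0000:00:13.0, as the printf trace above shows.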
01:19:18.018 Persistent Memory Region Support 01:19:18.018 ================================ 01:19:18.018 Supported: No 01:19:18.018 01:19:18.018 Admin Command Set Attributes 01:19:18.018 ============================ 01:19:18.018 Security Send/Receive: Not Supported 01:19:18.018 Format NVM: Supported 01:19:18.018 Firmware Activate/Download: Not Supported 01:19:18.018 Namespace Management: Supported 01:19:18.018 Device Self-Test: Not Supported 01:19:18.018 Directives: Supported 01:19:18.018 NVMe-MI: Not Supported 01:19:18.018 Virtualization Management: Not Supported 01:19:18.018 Doorbell Buffer Config: Supported 01:19:18.018 Get LBA Status Capability: Not Supported 01:19:18.018 Command & Feature Lockdown Capability: Not Supported 01:19:18.018 Abort Command Limit: 4 01:19:18.018 Async Event Request Limit: 4 01:19:18.018 Number of Firmware Slots: N/A 01:19:18.018 Firmware Slot 1 Read-Only: N/A 01:19:18.018 Firmware Activation Without Reset: N/A 01:19:18.018 Multiple Update Detection Support: N/A 01:19:18.018 Firmware Update Granularity: No Information Provided 01:19:18.018 Per-Namespace SMART Log: Yes 01:19:18.018 Asymmetric Namespace Access Log Page: Not Supported 01:19:18.018 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 01:19:18.018 Command Effects Log Page: Supported 01:19:18.018 Get Log Page Extended Data: Supported 01:19:18.018 Telemetry Log Pages: Not Supported 01:19:18.018 Persistent Event Log Pages: Not Supported 01:19:18.018 Supported Log Pages Log Page: May Support 01:19:18.018 Commands Supported & Effects Log Page: Not Supported 01:19:18.018 Feature Identifiers & Effects Log Page:May Support 01:19:18.018 NVMe-MI Commands & Effects Log Page: May Support 01:19:18.018 Data Area 4 for Telemetry Log: Not Supported 01:19:18.018 Error Log Page Entries Supported: 1 01:19:18.018 Keep Alive: Not Supported 01:19:18.018 01:19:18.018 NVM Command Set Attributes 01:19:18.018 ========================== 01:19:18.018 Submission Queue Entry Size 01:19:18.018 Max: 64 01:19:18.018 Min: 64 01:19:18.018 Completion Queue Entry Size 01:19:18.018 Max: 16 01:19:18.018 Min: 16 01:19:18.018 Number of Namespaces: 256 01:19:18.018 Compare Command: Supported 01:19:18.018 Write Uncorrectable Command: Not Supported 01:19:18.018 Dataset Management Command: Supported 01:19:18.018 Write Zeroes Command: Supported 01:19:18.018 Set Features Save Field: Supported 01:19:18.018 Reservations: Not Supported 01:19:18.018 Timestamp: Supported 01:19:18.018 Copy: Supported 01:19:18.018 Volatile Write Cache: Present 01:19:18.018 Atomic Write Unit (Normal): 1 01:19:18.018 Atomic Write Unit (PFail): 1 01:19:18.018 Atomic Compare & Write Unit: 1 01:19:18.018 Fused Compare & Write: Not Supported 01:19:18.018 Scatter-Gather List 01:19:18.018 SGL Command Set: Supported 01:19:18.018 SGL Keyed: Not Supported 01:19:18.018 SGL Bit Bucket Descriptor: Not Supported 01:19:18.018 SGL Metadata Pointer: Not Supported 01:19:18.018 Oversized SGL: Not Supported 01:19:18.019 SGL Metadata Address: Not Supported 01:19:18.019 SGL Offset: Not Supported 01:19:18.019 Transport SGL Data Block: Not Supported 01:19:18.019 Replay Protected Memory Block: Not Supported 01:19:18.019 01:19:18.019 Firmware Slot Information 01:19:18.019 ========================= 01:19:18.019 Active slot: 1 01:19:18.019 Slot 1 Firmware Revision: 1.0 01:19:18.019 01:19:18.019 01:19:18.019 Commands Supported and Effects 01:19:18.019 ============================== 01:19:18.019 Admin Commands 01:19:18.019 -------------- 01:19:18.019 Delete I/O Submission Queue (00h): Supported 
01:19:18.019 Create I/O Submission Queue (01h): Supported 01:19:18.019 Get Log Page (02h): Supported 01:19:18.019 Delete I/O Completion Queue (04h): Supported 01:19:18.019 Create I/O Completion Queue (05h): Supported 01:19:18.019 Identify (06h): Supported 01:19:18.019 Abort (08h): Supported 01:19:18.019 Set Features (09h): Supported 01:19:18.019 Get Features (0Ah): Supported 01:19:18.019 Asynchronous Event Request (0Ch): Supported 01:19:18.019 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:18.019 Directive Send (19h): Supported 01:19:18.019 Directive Receive (1Ah): Supported 01:19:18.019 Virtualization Management (1Ch): Supported 01:19:18.019 Doorbell Buffer Config (7Ch): Supported 01:19:18.019 Format NVM (80h): Supported LBA-Change 01:19:18.019 I/O Commands 01:19:18.019 ------------ 01:19:18.019 Flush (00h): Supported LBA-Change 01:19:18.019 Write (01h): Supported LBA-Change 01:19:18.019 Read (02h): Supported 01:19:18.019 Compare (05h): Supported 01:19:18.019 Write Zeroes (08h): Supported LBA-Change 01:19:18.019 Dataset Management (09h): Supported LBA-Change 01:19:18.019 Unknown (0Ch): Supported 01:19:18.019 Unknown (12h): Supported 01:19:18.019 Copy (19h): Supported LBA-Change 01:19:18.019 Unknown (1Dh): Supported LBA-Change 01:19:18.019 01:19:18.019 Error Log 01:19:18.019 ========= 01:19:18.019 01:19:18.019 Arbitration 01:19:18.019 =========== 01:19:18.019 Arbitration Burst: no limit 01:19:18.019 01:19:18.019 Power Management 01:19:18.019 ================ 01:19:18.019 Number of Power States: 1 01:19:18.019 Current Power State: Power State #0 01:19:18.019 Power State #0: 01:19:18.019 Max Power: 25.00 W 01:19:18.019 Non-Operational State: Operational 01:19:18.019 Entry Latency: 16 microseconds 01:19:18.019 Exit Latency: 4 microseconds 01:19:18.019 Relative Read Throughput: 0 01:19:18.019 Relative Read Latency: 0 01:19:18.019 Relative Write Throughput: 0 01:19:18.019 Relative Write Latency: 0 01:19:18.019 Idle Power: Not Reported 01:19:18.019 Active Power: Not Reported 01:19:18.019 Non-Operational Permissive Mode: Not Supported 01:19:18.019 01:19:18.019 Health Information 01:19:18.019 ================== 01:19:18.019 Critical Warnings: 01:19:18.019 Available Spare Space: OK 01:19:18.019 Temperature: OK 01:19:18.019 Device Reliability: OK 01:19:18.019 Read Only: No 01:19:18.019 Volatile Memory Backup: OK 01:19:18.019 Current Temperature: 323 Kelvin (50 Celsius) 01:19:18.019 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:18.019 Available Spare: 0% 01:19:18.019 Available Spare Threshold: 0% 01:19:18.019 Life Percentage Used: 0% 01:19:18.019 Data Units Read: 871 01:19:18.019 Data Units Written: 800 01:19:18.019 Host Read Commands: 39341 01:19:18.019 Host Write Commands: 38764 01:19:18.019 Controller Busy Time: 0 minutes 01:19:18.019 Power Cycles: 0 01:19:18.019 Power On Hours: 0 hours 01:19:18.019 Unsafe Shutdowns: 0 01:19:18.019 Unrecoverable Media Errors: 0 01:19:18.019 Lifetime Error Log Entries: 0 01:19:18.019 Warning Temperature Time: 0 minutes 01:19:18.019 Critical Temperature Time: 0 minutes 01:19:18.019 01:19:18.019 Number of Queues 01:19:18.019 ================ 01:19:18.019 Number of I/O Submission Queues: 64 01:19:18.019 Number of I/O Completion Queues: 64 01:19:18.019 01:19:18.019 ZNS Specific Controller Data 01:19:18.019 ============================ 01:19:18.019 Zone Append Size Limit: 0 01:19:18.019 01:19:18.019 01:19:18.019 Active Namespaces 01:19:18.019 ================= 01:19:18.019 Namespace ID:1 01:19:18.019 Error Recovery Timeout: Unlimited 01:19:18.019 
Command Set Identifier: NVM (00h) 01:19:18.019 Deallocate: Supported 01:19:18.019 Deallocated/Unwritten Error: Supported 01:19:18.019 Deallocated Read Value: All 0x00 01:19:18.019 Deallocate in Write Zeroes: Not Supported 01:19:18.019 Deallocated Guard Field: 0xFFFF 01:19:18.019 Flush: Supported 01:19:18.019 Reservation: Not Supported 01:19:18.019 Namespace Sharing Capabilities: Multiple Controllers 01:19:18.019 Size (in LBAs): 262144 (1GiB) 01:19:18.019 Capacity (in LBAs): 262144 (1GiB) 01:19:18.019 Utilization (in LBAs): 262144 (1GiB) 01:19:18.019 Thin Provisioning: Not Supported 01:19:18.019 Per-NS Atomic Units: No 01:19:18.019 Maximum Single Source Range Length: 128 01:19:18.019 Maximum Copy Length: 128 01:19:18.019 Maximum Source Range Count: 128 01:19:18.019 NGUID/EUI64 Never Reused: No 01:19:18.019 Namespace Write Protected: No 01:19:18.019 Endurance group ID: 1 01:19:18.019 Number of LBA Formats: 8 01:19:18.019 Current LBA Format: LBA Format #04 01:19:18.019 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:18.019 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.019 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.019 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.019 01:19:18.019 Get Feature FDP: 01:19:18.019 ================ 01:19:18.019 Enabled: Yes 01:19:18.019 FDP configuration index: 0 01:19:18.019 01:19:18.019 FDP configurations log page 01:19:18.019 =========================== 01:19:18.019 Number of FDP configurations: 1 01:19:18.019 Version: 0 01:19:18.019 Size: 112 01:19:18.019 FDP Configuration Descriptor: 0 01:19:18.019 Descriptor Size: 96 01:19:18.019 Reclaim Group Identifier format: 2 01:19:18.019 FDP Volatile Write Cache: Not Present 01:19:18.019 FDP Configuration: Valid 01:19:18.019 Vendor Specific Size: 0 01:19:18.019 Number of Reclaim Groups: 2 01:19:18.019 Number of Reclaim Unit Handles: 8 01:19:18.019 Max Placement Identifiers: 128 01:19:18.019 Number of Namespaces Supported: 256 01:19:18.019 Reclaim Unit Nominal Size: 6000000 bytes 01:19:18.019 Estimated Reclaim Unit Time Limit: Not Reported 01:19:18.019 RUH Desc #000: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #001: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #002: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #003: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #004: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #005: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #006: RUH Type: Initially Isolated 01:19:18.019 RUH Desc #007: RUH Type: Initially Isolated 01:19:18.019 01:19:18.019 FDP reclaim unit handle usage log page 01:19:18.019 ====================================== 01:19:18.019 [2024-12-09 05:14:00.448335] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64109 terminated unexpected 01:19:18.019 Number of Reclaim Unit Handles: 8 01:19:18.019 RUH Usage Desc #000: RUH Attributes: Controller Specified 01:19:18.019 RUH Usage Desc #001: RUH Attributes: Unused 01:19:18.019 RUH Usage Desc #002: RUH Attributes: Unused 01:19:18.019 RUH Usage Desc #003: RUH Attributes: Unused 01:19:18.019 RUH Usage Desc #004: RUH Attributes: Unused 01:19:18.019 RUH Usage Desc #005: RUH Attributes: Unused 01:19:18.019 RUH Usage Desc #006: RUH Attributes: Unused 01:19:18.019 RUH Usage Desc
#007: RUH Attributes: Unused 01:19:18.019 01:19:18.019 FDP statistics log page 01:19:18.019 ======================= 01:19:18.019 Host bytes with metadata written: 511811584 01:19:18.019 Media bytes with metadata written: 511868928 01:19:18.019 Media bytes erased: 0 01:19:18.019 01:19:18.019 FDP events log page 01:19:18.019 =================== 01:19:18.019 Number of FDP events: 0 01:19:18.019 01:19:18.019 NVM Specific Namespace Data 01:19:18.019 =========================== 01:19:18.019 Logical Block Storage Tag Mask: 0 01:19:18.019 Protection Information Capabilities: 01:19:18.019 16b Guard Protection Information Storage Tag Support: No 01:19:18.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:18.019 Storage Tag Check Read Support: No 01:19:18.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.020 ===================================================== 01:19:18.020 NVMe Controller at 0000:00:10.0 [1b36:0010] 01:19:18.020 ===================================================== 01:19:18.020 Controller Capabilities/Features 01:19:18.020 ================================ 01:19:18.020 Vendor ID: 1b36 01:19:18.020 Subsystem Vendor ID: 1af4 01:19:18.020 Serial Number: 12340 01:19:18.020 Model Number: QEMU NVMe Ctrl 01:19:18.020 Firmware Version: 8.0.0 01:19:18.020 Recommended Arb Burst: 6 01:19:18.020 IEEE OUI Identifier: 00 54 52 01:19:18.020 Multi-path I/O 01:19:18.020 May have multiple subsystem ports: No 01:19:18.020 May have multiple controllers: No 01:19:18.020 Associated with SR-IOV VF: No 01:19:18.020 Max Data Transfer Size: 524288 01:19:18.020 Max Number of Namespaces: 256 01:19:18.020 Max Number of I/O Queues: 64 01:19:18.020 NVMe Specification Version (VS): 1.4 01:19:18.020 NVMe Specification Version (Identify): 1.4 01:19:18.020 Maximum Queue Entries: 2048 01:19:18.020 Contiguous Queues Required: Yes 01:19:18.020 Arbitration Mechanisms Supported 01:19:18.020 Weighted Round Robin: Not Supported 01:19:18.020 Vendor Specific: Not Supported 01:19:18.020 Reset Timeout: 7500 ms 01:19:18.020 Doorbell Stride: 4 bytes 01:19:18.020 NVM Subsystem Reset: Not Supported 01:19:18.020 Command Sets Supported 01:19:18.020 NVM Command Set: Supported 01:19:18.020 Boot Partition: Not Supported 01:19:18.020 Memory Page Size Minimum: 4096 bytes 01:19:18.020 Memory Page Size Maximum: 65536 bytes 01:19:18.020 Persistent Memory Region: Not Supported 01:19:18.020 Optional Asynchronous Events Supported 01:19:18.020 Namespace Attribute Notices: Supported 01:19:18.020 Firmware Activation Notices: Not Supported 01:19:18.020 ANA Change Notices: Not Supported 01:19:18.020 PLE Aggregate Log Change Notices: Not Supported 01:19:18.020 LBA Status Info Alert Notices: Not Supported 01:19:18.020 EGE Aggregate Log Change 
Notices: Not Supported 01:19:18.020 Normal NVM Subsystem Shutdown event: Not Supported 01:19:18.020 Zone Descriptor Change Notices: Not Supported 01:19:18.020 Discovery Log Change Notices: Not Supported 01:19:18.020 Controller Attributes 01:19:18.020 128-bit Host Identifier: Not Supported 01:19:18.020 Non-Operational Permissive Mode: Not Supported 01:19:18.020 NVM Sets: Not Supported 01:19:18.020 Read Recovery Levels: Not Supported 01:19:18.020 Endurance Groups: Not Supported 01:19:18.020 Predictable Latency Mode: Not Supported 01:19:18.020 Traffic Based Keep ALive: Not Supported 01:19:18.020 Namespace Granularity: Not Supported 01:19:18.020 SQ Associations: Not Supported 01:19:18.020 UUID List: Not Supported 01:19:18.020 Multi-Domain Subsystem: Not Supported 01:19:18.020 Fixed Capacity Management: Not Supported 01:19:18.020 Variable Capacity Management: Not Supported 01:19:18.020 Delete Endurance Group: Not Supported 01:19:18.020 Delete NVM Set: Not Supported 01:19:18.020 Extended LBA Formats Supported: Supported 01:19:18.020 Flexible Data Placement Supported: Not Supported 01:19:18.020 01:19:18.020 Controller Memory Buffer Support 01:19:18.020 ================================ 01:19:18.020 Supported: No 01:19:18.020 01:19:18.020 Persistent Memory Region Support 01:19:18.020 ================================ 01:19:18.020 Supported: No 01:19:18.020 01:19:18.020 Admin Command Set Attributes 01:19:18.020 ============================ 01:19:18.020 Security Send/Receive: Not Supported 01:19:18.020 Format NVM: Supported 01:19:18.020 Firmware Activate/Download: Not Supported 01:19:18.020 Namespace Management: Supported 01:19:18.020 Device Self-Test: Not Supported 01:19:18.020 Directives: Supported 01:19:18.020 NVMe-MI: Not Supported 01:19:18.020 Virtualization Management: Not Supported 01:19:18.020 Doorbell Buffer Config: Supported 01:19:18.020 Get LBA Status Capability: Not Supported 01:19:18.020 Command & Feature Lockdown Capability: Not Supported 01:19:18.020 Abort Command Limit: 4 01:19:18.020 Async Event Request Limit: 4 01:19:18.020 Number of Firmware Slots: N/A 01:19:18.020 Firmware Slot 1 Read-Only: N/A 01:19:18.020 Firmware Activation Without Reset: N/A 01:19:18.020 Multiple Update Detection Support: N/A 01:19:18.020 Firmware Update Granularity: No Information Provided 01:19:18.020 Per-Namespace SMART Log: Yes 01:19:18.020 Asymmetric Namespace Access Log Page: Not Supported 01:19:18.020 Subsystem NQN: nqn.2019-08.org.qemu:12340 01:19:18.020 Command Effects Log Page: Supported 01:19:18.020 Get Log Page Extended Data: Supported 01:19:18.020 Telemetry Log Pages: Not Supported 01:19:18.020 Persistent Event Log Pages: Not Supported 01:19:18.020 Supported Log Pages Log Page: May Support 01:19:18.020 Commands Supported & Effects Log Page: Not Supported 01:19:18.020 Feature Identifiers & Effects Log Page:May Support 01:19:18.020 NVMe-MI Commands & Effects Log Page: May Support 01:19:18.020 Data Area 4 for Telemetry Log: Not Supported 01:19:18.020 Error Log Page Entries Supported: 1 01:19:18.020 Keep Alive: Not Supported 01:19:18.020 01:19:18.020 NVM Command Set Attributes 01:19:18.020 ========================== 01:19:18.020 Submission Queue Entry Size 01:19:18.020 Max: 64 01:19:18.020 Min: 64 01:19:18.020 Completion Queue Entry Size 01:19:18.020 Max: 16 01:19:18.020 Min: 16 01:19:18.020 Number of Namespaces: 256 01:19:18.020 Compare Command: Supported 01:19:18.020 Write Uncorrectable Command: Not Supported 01:19:18.020 Dataset Management Command: Supported 01:19:18.020 Write Zeroes Command: 
Supported 01:19:18.020 Set Features Save Field: Supported 01:19:18.020 Reservations: Not Supported 01:19:18.020 Timestamp: Supported 01:19:18.020 Copy: Supported 01:19:18.020 Volatile Write Cache: Present 01:19:18.020 Atomic Write Unit (Normal): 1 01:19:18.020 Atomic Write Unit (PFail): 1 01:19:18.020 Atomic Compare & Write Unit: 1 01:19:18.020 Fused Compare & Write: Not Supported 01:19:18.020 Scatter-Gather List 01:19:18.020 SGL Command Set: Supported 01:19:18.020 SGL Keyed: Not Supported 01:19:18.020 SGL Bit Bucket Descriptor: Not Supported 01:19:18.020 SGL Metadata Pointer: Not Supported 01:19:18.020 Oversized SGL: Not Supported 01:19:18.020 SGL Metadata Address: Not Supported 01:19:18.020 SGL Offset: Not Supported 01:19:18.020 Transport SGL Data Block: Not Supported 01:19:18.020 Replay Protected Memory Block: Not Supported 01:19:18.020 01:19:18.020 Firmware Slot Information 01:19:18.020 ========================= 01:19:18.020 Active slot: 1 01:19:18.020 Slot 1 Firmware Revision: 1.0 01:19:18.020 01:19:18.020 01:19:18.020 Commands Supported and Effects 01:19:18.020 ============================== 01:19:18.020 Admin Commands 01:19:18.020 -------------- 01:19:18.020 Delete I/O Submission Queue (00h): Supported 01:19:18.020 Create I/O Submission Queue (01h): Supported 01:19:18.020 Get Log Page (02h): Supported 01:19:18.020 Delete I/O Completion Queue (04h): Supported 01:19:18.020 Create I/O Completion Queue (05h): Supported 01:19:18.020 Identify (06h): Supported 01:19:18.020 Abort (08h): Supported 01:19:18.020 Set Features (09h): Supported 01:19:18.020 Get Features (0Ah): Supported 01:19:18.020 Asynchronous Event Request (0Ch): Supported 01:19:18.020 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:18.020 Directive Send (19h): Supported 01:19:18.020 Directive Receive (1Ah): Supported 01:19:18.020 Virtualization Management (1Ch): Supported 01:19:18.020 Doorbell Buffer Config (7Ch): Supported 01:19:18.020 Format NVM (80h): Supported LBA-Change 01:19:18.020 I/O Commands 01:19:18.020 ------------ 01:19:18.020 Flush (00h): Supported LBA-Change 01:19:18.020 Write (01h): Supported LBA-Change 01:19:18.020 Read (02h): Supported 01:19:18.020 Compare (05h): Supported 01:19:18.020 Write Zeroes (08h): Supported LBA-Change 01:19:18.020 Dataset Management (09h): Supported LBA-Change 01:19:18.020 Unknown (0Ch): Supported 01:19:18.020 Unknown (12h): Supported 01:19:18.020 Copy (19h): Supported LBA-Change 01:19:18.020 Unknown (1Dh): Supported LBA-Change 01:19:18.020 01:19:18.020 Error Log 01:19:18.020 ========= 01:19:18.021 01:19:18.021 Arbitration 01:19:18.021 =========== 01:19:18.021 Arbitration Burst: no limit 01:19:18.021 01:19:18.021 Power Management 01:19:18.021 ================ 01:19:18.021 Number of Power States: 1 01:19:18.021 Current Power State: Power State #0 01:19:18.021 Power State #0: 01:19:18.021 Max Power: 25.00 W 01:19:18.021 Non-Operational State: Operational 01:19:18.021 Entry Latency: 16 microseconds 01:19:18.021 Exit Latency: 4 microseconds 01:19:18.021 Relative Read Throughput: 0 01:19:18.021 Relative Read Latency: 0 01:19:18.021 Relative Write Throughput: 0 01:19:18.021 Relative Write Latency: 0 01:19:18.021 Idle Power: Not Reported 01:19:18.021 Active Power: Not Reported 01:19:18.021 Non-Operational Permissive Mode: Not Supported 01:19:18.021 01:19:18.021 Health Information 01:19:18.021 ================== 01:19:18.021 Critical Warnings: 01:19:18.021 Available Spare Space: OK 01:19:18.021 Temperature: OK 01:19:18.021 Device Reliability: OK 01:19:18.021 Read Only: No 
01:19:18.021 Volatile Memory Backup: OK 01:19:18.021 Current Temperature: 323 Kelvin (50 Celsius) 01:19:18.021 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:18.021 Available Spare: 0% 01:19:18.021 Available Spare Threshold: 0% 01:19:18.021 Life Percentage Used: 0% 01:19:18.021 Data Units Read: 791 01:19:18.021 Data Units Written: 719 01:19:18.021 Host Read Commands: 38415 01:19:18.021 Host Write Commands: 38201 01:19:18.021 Controller Busy Time: 0 minutes 01:19:18.021 Power Cycles: 0 01:19:18.021 Power On Hours: 0 hours 01:19:18.021 Unsafe Shutdowns: 0 01:19:18.021 Unrecoverable Media Errors: 0 01:19:18.021 Lifetime Error Log Entries: 0 01:19:18.021 Warning Temperature Time: 0 minutes 01:19:18.021 Critical Temperature Time: 0 minutes 01:19:18.021 01:19:18.021 Number of Queues 01:19:18.021 ================ 01:19:18.021 Number of I/O Submission Queues: 64 01:19:18.021 Number of I/O Completion Queues: 64 01:19:18.021 01:19:18.021 ZNS Specific Controller Data 01:19:18.021 ============================ 01:19:18.021 Zone Append Size Limit: 0 01:19:18.021 01:19:18.021 01:19:18.021 Active Namespaces 01:19:18.021 ================= 01:19:18.021 Namespace ID:1 01:19:18.021 Error Recovery Timeout: Unlimited 01:19:18.021 Command Set Identifier: NVM (00h) 01:19:18.021 Deallocate: Supported 01:19:18.021 Deallocated/Unwritten Error: Supported 01:19:18.021 Deallocated Read Value: All 0x00 01:19:18.021 Deallocate in Write Zeroes: Not Supported 01:19:18.021 Deallocated Guard Field: 0xFFFF 01:19:18.021 Flush: Supported 01:19:18.021 Reservation: Not Supported 01:19:18.021 Metadata Transferred as: Separate Metadata Buffer 01:19:18.021 Namespace Sharing Capabilities: Private 01:19:18.021 Size (in LBAs): 1548666 (5GiB) 01:19:18.021 Capacity (in LBAs): 1548666 (5GiB) 01:19:18.021 Utilization (in LBAs): 1548666 (5GiB) 01:19:18.021 Thin Provisioning: Not Supported 01:19:18.021 Per-NS Atomic Units: No 01:19:18.021 Maximum Single Source Range Length: 128 01:19:18.021 Maximum Copy Length: 128 01:19:18.021 Maximum Source Range Count: 128 01:19:18.021 NGUID/EUI64 Never Reused: No 01:19:18.021 Namespace Write Protected: No 01:19:18.021 Number of LBA Formats: 8 01:19:18.021 [2024-12-09 05:14:00.449292] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64109 terminated unexpected 01:19:18.021 Current LBA Format: LBA Format #07 01:19:18.021 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:18.021 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.021 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.021 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.021 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.021 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.021 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.021 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.021 01:19:18.021 NVM Specific Namespace Data 01:19:18.021 =========================== 01:19:18.021 Logical Block Storage Tag Mask: 0 01:19:18.021 Protection Information Capabilities: 01:19:18.021 16b Guard Protection Information Storage Tag Support: No 01:19:18.021 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:18.021 Storage Tag Check Read Support: No 01:19:18.021 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #02: Storage Tag Size: 0 , Protection
Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.021 ===================================================== 01:19:18.021 NVMe Controller at 0000:00:11.0 [1b36:0010] 01:19:18.021 ===================================================== 01:19:18.021 Controller Capabilities/Features 01:19:18.021 ================================ 01:19:18.021 Vendor ID: 1b36 01:19:18.021 Subsystem Vendor ID: 1af4 01:19:18.021 Serial Number: 12341 01:19:18.021 Model Number: QEMU NVMe Ctrl 01:19:18.021 Firmware Version: 8.0.0 01:19:18.021 Recommended Arb Burst: 6 01:19:18.021 IEEE OUI Identifier: 00 54 52 01:19:18.021 Multi-path I/O 01:19:18.021 May have multiple subsystem ports: No 01:19:18.021 May have multiple controllers: No 01:19:18.021 Associated with SR-IOV VF: No 01:19:18.021 Max Data Transfer Size: 524288 01:19:18.021 Max Number of Namespaces: 256 01:19:18.021 Max Number of I/O Queues: 64 01:19:18.021 NVMe Specification Version (VS): 1.4 01:19:18.021 NVMe Specification Version (Identify): 1.4 01:19:18.021 Maximum Queue Entries: 2048 01:19:18.021 Contiguous Queues Required: Yes 01:19:18.021 Arbitration Mechanisms Supported 01:19:18.021 Weighted Round Robin: Not Supported 01:19:18.021 Vendor Specific: Not Supported 01:19:18.021 Reset Timeout: 7500 ms 01:19:18.021 Doorbell Stride: 4 bytes 01:19:18.021 NVM Subsystem Reset: Not Supported 01:19:18.021 Command Sets Supported 01:19:18.021 NVM Command Set: Supported 01:19:18.021 Boot Partition: Not Supported 01:19:18.021 Memory Page Size Minimum: 4096 bytes 01:19:18.021 Memory Page Size Maximum: 65536 bytes 01:19:18.021 Persistent Memory Region: Not Supported 01:19:18.021 Optional Asynchronous Events Supported 01:19:18.021 Namespace Attribute Notices: Supported 01:19:18.021 Firmware Activation Notices: Not Supported 01:19:18.021 ANA Change Notices: Not Supported 01:19:18.021 PLE Aggregate Log Change Notices: Not Supported 01:19:18.021 LBA Status Info Alert Notices: Not Supported 01:19:18.021 EGE Aggregate Log Change Notices: Not Supported 01:19:18.021 Normal NVM Subsystem Shutdown event: Not Supported 01:19:18.021 Zone Descriptor Change Notices: Not Supported 01:19:18.021 Discovery Log Change Notices: Not Supported 01:19:18.021 Controller Attributes 01:19:18.021 128-bit Host Identifier: Not Supported 01:19:18.021 Non-Operational Permissive Mode: Not Supported 01:19:18.021 NVM Sets: Not Supported 01:19:18.021 Read Recovery Levels: Not Supported 01:19:18.021 Endurance Groups: Not Supported 01:19:18.021 Predictable Latency Mode: Not Supported 01:19:18.021 Traffic Based Keep ALive: Not Supported 01:19:18.021 Namespace Granularity: Not Supported 01:19:18.021 SQ Associations: Not Supported 01:19:18.021 UUID List: Not Supported 01:19:18.021 Multi-Domain Subsystem: Not Supported 01:19:18.021 Fixed Capacity Management: Not Supported 01:19:18.021 Variable Capacity Management: Not Supported 01:19:18.021 Delete Endurance Group: Not Supported 01:19:18.021 Delete NVM Set: Not Supported 01:19:18.021 Extended LBA Formats Supported: Supported 01:19:18.021 Flexible Data Placement 
Supported: Not Supported 01:19:18.021 01:19:18.021 Controller Memory Buffer Support 01:19:18.021 ================================ 01:19:18.021 Supported: No 01:19:18.021 01:19:18.021 Persistent Memory Region Support 01:19:18.021 ================================ 01:19:18.021 Supported: No 01:19:18.021 01:19:18.021 Admin Command Set Attributes 01:19:18.021 ============================ 01:19:18.021 Security Send/Receive: Not Supported 01:19:18.021 Format NVM: Supported 01:19:18.021 Firmware Activate/Download: Not Supported 01:19:18.021 Namespace Management: Supported 01:19:18.021 Device Self-Test: Not Supported 01:19:18.021 Directives: Supported 01:19:18.021 NVMe-MI: Not Supported 01:19:18.021 Virtualization Management: Not Supported 01:19:18.022 Doorbell Buffer Config: Supported 01:19:18.022 Get LBA Status Capability: Not Supported 01:19:18.022 Command & Feature Lockdown Capability: Not Supported 01:19:18.022 Abort Command Limit: 4 01:19:18.022 Async Event Request Limit: 4 01:19:18.022 Number of Firmware Slots: N/A 01:19:18.022 Firmware Slot 1 Read-Only: N/A 01:19:18.022 Firmware Activation Without Reset: N/A 01:19:18.022 Multiple Update Detection Support: N/A 01:19:18.022 Firmware Update Granularity: No Information Provided 01:19:18.022 Per-Namespace SMART Log: Yes 01:19:18.022 Asymmetric Namespace Access Log Page: Not Supported 01:19:18.022 Subsystem NQN: nqn.2019-08.org.qemu:12341 01:19:18.022 Command Effects Log Page: Supported 01:19:18.022 Get Log Page Extended Data: Supported 01:19:18.022 Telemetry Log Pages: Not Supported 01:19:18.022 Persistent Event Log Pages: Not Supported 01:19:18.022 Supported Log Pages Log Page: May Support 01:19:18.022 Commands Supported & Effects Log Page: Not Supported 01:19:18.022 Feature Identifiers & Effects Log Page:May Support 01:19:18.022 NVMe-MI Commands & Effects Log Page: May Support 01:19:18.022 Data Area 4 for Telemetry Log: Not Supported 01:19:18.022 Error Log Page Entries Supported: 1 01:19:18.022 Keep Alive: Not Supported 01:19:18.022 01:19:18.022 NVM Command Set Attributes 01:19:18.022 ========================== 01:19:18.022 Submission Queue Entry Size 01:19:18.022 Max: 64 01:19:18.022 Min: 64 01:19:18.022 Completion Queue Entry Size 01:19:18.022 Max: 16 01:19:18.022 Min: 16 01:19:18.022 Number of Namespaces: 256 01:19:18.022 Compare Command: Supported 01:19:18.022 Write Uncorrectable Command: Not Supported 01:19:18.022 Dataset Management Command: Supported 01:19:18.022 Write Zeroes Command: Supported 01:19:18.022 Set Features Save Field: Supported 01:19:18.022 Reservations: Not Supported 01:19:18.022 Timestamp: Supported 01:19:18.022 Copy: Supported 01:19:18.022 Volatile Write Cache: Present 01:19:18.022 Atomic Write Unit (Normal): 1 01:19:18.022 Atomic Write Unit (PFail): 1 01:19:18.022 Atomic Compare & Write Unit: 1 01:19:18.022 Fused Compare & Write: Not Supported 01:19:18.022 Scatter-Gather List 01:19:18.022 SGL Command Set: Supported 01:19:18.022 SGL Keyed: Not Supported 01:19:18.022 SGL Bit Bucket Descriptor: Not Supported 01:19:18.022 SGL Metadata Pointer: Not Supported 01:19:18.022 Oversized SGL: Not Supported 01:19:18.022 SGL Metadata Address: Not Supported 01:19:18.022 SGL Offset: Not Supported 01:19:18.022 Transport SGL Data Block: Not Supported 01:19:18.022 Replay Protected Memory Block: Not Supported 01:19:18.022 01:19:18.022 Firmware Slot Information 01:19:18.022 ========================= 01:19:18.022 Active slot: 1 01:19:18.022 Slot 1 Firmware Revision: 1.0 01:19:18.022 01:19:18.022 01:19:18.022 Commands Supported and Effects 
01:19:18.022 ============================== 01:19:18.022 Admin Commands 01:19:18.022 -------------- 01:19:18.022 Delete I/O Submission Queue (00h): Supported 01:19:18.022 Create I/O Submission Queue (01h): Supported 01:19:18.022 Get Log Page (02h): Supported 01:19:18.022 Delete I/O Completion Queue (04h): Supported 01:19:18.022 Create I/O Completion Queue (05h): Supported 01:19:18.022 Identify (06h): Supported 01:19:18.022 Abort (08h): Supported 01:19:18.022 Set Features (09h): Supported 01:19:18.022 Get Features (0Ah): Supported 01:19:18.022 Asynchronous Event Request (0Ch): Supported 01:19:18.022 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:18.022 Directive Send (19h): Supported 01:19:18.022 Directive Receive (1Ah): Supported 01:19:18.022 Virtualization Management (1Ch): Supported 01:19:18.022 Doorbell Buffer Config (7Ch): Supported 01:19:18.022 Format NVM (80h): Supported LBA-Change 01:19:18.022 I/O Commands 01:19:18.022 ------------ 01:19:18.022 Flush (00h): Supported LBA-Change 01:19:18.022 Write (01h): Supported LBA-Change 01:19:18.022 Read (02h): Supported 01:19:18.022 Compare (05h): Supported 01:19:18.022 Write Zeroes (08h): Supported LBA-Change 01:19:18.022 Dataset Management (09h): Supported LBA-Change 01:19:18.022 Unknown (0Ch): Supported 01:19:18.022 Unknown (12h): Supported 01:19:18.022 Copy (19h): Supported LBA-Change 01:19:18.022 Unknown (1Dh): Supported LBA-Change 01:19:18.022 01:19:18.022 Error Log 01:19:18.022 ========= 01:19:18.022 01:19:18.022 Arbitration 01:19:18.022 =========== 01:19:18.022 Arbitration Burst: no limit 01:19:18.022 01:19:18.022 Power Management 01:19:18.022 ================ 01:19:18.022 Number of Power States: 1 01:19:18.022 Current Power State: Power State #0 01:19:18.022 Power State #0: 01:19:18.022 Max Power: 25.00 W 01:19:18.022 Non-Operational State: Operational 01:19:18.022 Entry Latency: 16 microseconds 01:19:18.022 Exit Latency: 4 microseconds 01:19:18.022 Relative Read Throughput: 0 01:19:18.022 Relative Read Latency: 0 01:19:18.022 Relative Write Throughput: 0 01:19:18.022 Relative Write Latency: 0 01:19:18.022 Idle Power: Not Reported 01:19:18.022 Active Power: Not Reported 01:19:18.022 Non-Operational Permissive Mode: Not Supported 01:19:18.022 01:19:18.022 Health Information 01:19:18.022 ================== 01:19:18.022 Critical Warnings: 01:19:18.022 Available Spare Space: OK 01:19:18.022 Temperature: OK 01:19:18.022 Device Reliability: OK 01:19:18.022 Read Only: No 01:19:18.022 Volatile Memory Backup: OK 01:19:18.022 Current Temperature: 323 Kelvin (50 Celsius) 01:19:18.022 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:18.022 Available Spare: 0% 01:19:18.022 Available Spare Threshold: 0% 01:19:18.022 Life Percentage Used: 0% 01:19:18.022 Data Units Read: 1214 01:19:18.022 Data Units Written: 1075 01:19:18.022 Host Read Commands: 57537 01:19:18.022 Host Write Commands: 56217 01:19:18.022 Controller Busy Time: 0 minutes 01:19:18.022 Power Cycles: 0 01:19:18.022 Power On Hours: 0 hours 01:19:18.022 Unsafe Shutdowns: 0 01:19:18.022 Unrecoverable Media Errors: 0 01:19:18.022 Lifetime Error Log Entries: 0 01:19:18.022 Warning Temperature Time: 0 minutes 01:19:18.022 Critical Temperature Time: 0 minutes 01:19:18.022 01:19:18.022 Number of Queues 01:19:18.022 ================ 01:19:18.022 Number of I/O Submission Queues: 64 01:19:18.022 Number of I/O Completion Queues: 64 01:19:18.022 01:19:18.022 ZNS Specific Controller Data 01:19:18.022 ============================ 01:19:18.022 Zone Append Size Limit: 0 01:19:18.022 
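A note on reading the Active Namespaces sections in these identify dumps: where the current LBA format has a 4096-byte data size and no metadata (as on controllers 12341 and 12343), the GiB figure is simply the LBA count multiplied by the block size. A quick bash-arithmetic check of the two such cases in this run:

    # 1310720 LBAs x 4096 B = 5 GiB (controller 12341, below)
    echo "$(( 1310720 * 4096 / 1024**3 )) GiB"
    # 262144 LBAs x 4096 B = 1 GiB (controller 12343, above)
    echo "$(( 262144 * 4096 / 1024**3 )) GiB"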
01:19:18.022 01:19:18.022 Active Namespaces 01:19:18.022 ================= 01:19:18.022 Namespace ID:1 01:19:18.022 Error Recovery Timeout: Unlimited 01:19:18.022 Command Set Identifier: NVM (00h) 01:19:18.022 Deallocate: Supported 01:19:18.022 Deallocated/Unwritten Error: Supported 01:19:18.022 Deallocated Read Value: All 0x00 01:19:18.022 Deallocate in Write Zeroes: Not Supported 01:19:18.022 Deallocated Guard Field: 0xFFFF 01:19:18.022 Flush: Supported 01:19:18.022 Reservation: Not Supported 01:19:18.022 Namespace Sharing Capabilities: Private 01:19:18.022 Size (in LBAs): 1310720 (5GiB) 01:19:18.022 Capacity (in LBAs): 1310720 (5GiB) 01:19:18.022 Utilization (in LBAs): 1310720 (5GiB) 01:19:18.022 Thin Provisioning: Not Supported 01:19:18.022 Per-NS Atomic Units: No 01:19:18.023 Maximum Single Source Range Length: 128 01:19:18.023 Maximum Copy Length: 128 01:19:18.023 Maximum Source Range Count: 128 01:19:18.023 NGUID/EUI64 Never Reused: No 01:19:18.023 Namespace Write Protected: No 01:19:18.023 Number of LBA Formats: 8 01:19:18.023 Current LBA Format: LBA Format #04 01:19:18.023 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:18.023 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.023 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.023 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.023 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.023 [2024-12-09 05:14:00.450178] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64109 terminated unexpected 01:19:18.023 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.023 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.023 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.023 01:19:18.023 NVM Specific Namespace Data 01:19:18.023 =========================== 01:19:18.023 Logical Block Storage Tag Mask: 0 01:19:18.023 Protection Information Capabilities: 01:19:18.023 16b Guard Protection Information Storage Tag Support: No 01:19:18.023 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:18.023 Storage Tag Check Read Support: No 01:19:18.023 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.023 ===================================================== 01:19:18.023 NVMe Controller at 0000:00:12.0 [1b36:0010] 01:19:18.023 ===================================================== 01:19:18.023 Controller Capabilities/Features 01:19:18.023 ================================ 01:19:18.023 Vendor ID: 1b36 01:19:18.023 Subsystem Vendor ID: 1af4 01:19:18.023 Serial Number: 12342 01:19:18.023 Model Number: QEMU NVMe Ctrl 01:19:18.023 Firmware Version: 8.0.0 01:19:18.023 Recommended Arb Burst: 6 01:19:18.023 IEEE OUI Identifier: 00 54 52 01:19:18.023 Multi-path I/O
01:19:18.023 May have multiple subsystem ports: No 01:19:18.023 May have multiple controllers: No 01:19:18.023 Associated with SR-IOV VF: No 01:19:18.023 Max Data Transfer Size: 524288 01:19:18.023 Max Number of Namespaces: 256 01:19:18.023 Max Number of I/O Queues: 64 01:19:18.023 NVMe Specification Version (VS): 1.4 01:19:18.023 NVMe Specification Version (Identify): 1.4 01:19:18.023 Maximum Queue Entries: 2048 01:19:18.023 Contiguous Queues Required: Yes 01:19:18.023 Arbitration Mechanisms Supported 01:19:18.023 Weighted Round Robin: Not Supported 01:19:18.023 Vendor Specific: Not Supported 01:19:18.023 Reset Timeout: 7500 ms 01:19:18.023 Doorbell Stride: 4 bytes 01:19:18.023 NVM Subsystem Reset: Not Supported 01:19:18.023 Command Sets Supported 01:19:18.023 NVM Command Set: Supported 01:19:18.023 Boot Partition: Not Supported 01:19:18.023 Memory Page Size Minimum: 4096 bytes 01:19:18.023 Memory Page Size Maximum: 65536 bytes 01:19:18.023 Persistent Memory Region: Not Supported 01:19:18.023 Optional Asynchronous Events Supported 01:19:18.023 Namespace Attribute Notices: Supported 01:19:18.023 Firmware Activation Notices: Not Supported 01:19:18.023 ANA Change Notices: Not Supported 01:19:18.023 PLE Aggregate Log Change Notices: Not Supported 01:19:18.023 LBA Status Info Alert Notices: Not Supported 01:19:18.023 EGE Aggregate Log Change Notices: Not Supported 01:19:18.023 Normal NVM Subsystem Shutdown event: Not Supported 01:19:18.023 Zone Descriptor Change Notices: Not Supported 01:19:18.023 Discovery Log Change Notices: Not Supported 01:19:18.023 Controller Attributes 01:19:18.023 128-bit Host Identifier: Not Supported 01:19:18.023 Non-Operational Permissive Mode: Not Supported 01:19:18.023 NVM Sets: Not Supported 01:19:18.023 Read Recovery Levels: Not Supported 01:19:18.023 Endurance Groups: Not Supported 01:19:18.023 Predictable Latency Mode: Not Supported 01:19:18.023 Traffic Based Keep ALive: Not Supported 01:19:18.023 Namespace Granularity: Not Supported 01:19:18.023 SQ Associations: Not Supported 01:19:18.023 UUID List: Not Supported 01:19:18.023 Multi-Domain Subsystem: Not Supported 01:19:18.023 Fixed Capacity Management: Not Supported 01:19:18.023 Variable Capacity Management: Not Supported 01:19:18.023 Delete Endurance Group: Not Supported 01:19:18.023 Delete NVM Set: Not Supported 01:19:18.023 Extended LBA Formats Supported: Supported 01:19:18.023 Flexible Data Placement Supported: Not Supported 01:19:18.023 01:19:18.023 Controller Memory Buffer Support 01:19:18.023 ================================ 01:19:18.023 Supported: No 01:19:18.023 01:19:18.023 Persistent Memory Region Support 01:19:18.023 ================================ 01:19:18.023 Supported: No 01:19:18.023 01:19:18.023 Admin Command Set Attributes 01:19:18.023 ============================ 01:19:18.023 Security Send/Receive: Not Supported 01:19:18.023 Format NVM: Supported 01:19:18.023 Firmware Activate/Download: Not Supported 01:19:18.023 Namespace Management: Supported 01:19:18.023 Device Self-Test: Not Supported 01:19:18.023 Directives: Supported 01:19:18.023 NVMe-MI: Not Supported 01:19:18.023 Virtualization Management: Not Supported 01:19:18.023 Doorbell Buffer Config: Supported 01:19:18.023 Get LBA Status Capability: Not Supported 01:19:18.023 Command & Feature Lockdown Capability: Not Supported 01:19:18.023 Abort Command Limit: 4 01:19:18.023 Async Event Request Limit: 4 01:19:18.023 Number of Firmware Slots: N/A 01:19:18.023 Firmware Slot 1 Read-Only: N/A 01:19:18.023 Firmware Activation Without Reset: N/A 
01:19:18.023 Multiple Update Detection Support: N/A 01:19:18.023 Firmware Update Granularity: No Information Provided 01:19:18.023 Per-Namespace SMART Log: Yes 01:19:18.023 Asymmetric Namespace Access Log Page: Not Supported 01:19:18.023 Subsystem NQN: nqn.2019-08.org.qemu:12342 01:19:18.023 Command Effects Log Page: Supported 01:19:18.023 Get Log Page Extended Data: Supported 01:19:18.023 Telemetry Log Pages: Not Supported 01:19:18.023 Persistent Event Log Pages: Not Supported 01:19:18.023 Supported Log Pages Log Page: May Support 01:19:18.023 Commands Supported & Effects Log Page: Not Supported 01:19:18.023 Feature Identifiers & Effects Log Page:May Support 01:19:18.023 NVMe-MI Commands & Effects Log Page: May Support 01:19:18.023 Data Area 4 for Telemetry Log: Not Supported 01:19:18.023 Error Log Page Entries Supported: 1 01:19:18.023 Keep Alive: Not Supported 01:19:18.023 01:19:18.023 NVM Command Set Attributes 01:19:18.023 ========================== 01:19:18.023 Submission Queue Entry Size 01:19:18.023 Max: 64 01:19:18.023 Min: 64 01:19:18.023 Completion Queue Entry Size 01:19:18.023 Max: 16 01:19:18.023 Min: 16 01:19:18.023 Number of Namespaces: 256 01:19:18.023 Compare Command: Supported 01:19:18.023 Write Uncorrectable Command: Not Supported 01:19:18.024 Dataset Management Command: Supported 01:19:18.024 Write Zeroes Command: Supported 01:19:18.024 Set Features Save Field: Supported 01:19:18.024 Reservations: Not Supported 01:19:18.024 Timestamp: Supported 01:19:18.024 Copy: Supported 01:19:18.024 Volatile Write Cache: Present 01:19:18.024 Atomic Write Unit (Normal): 1 01:19:18.024 Atomic Write Unit (PFail): 1 01:19:18.024 Atomic Compare & Write Unit: 1 01:19:18.024 Fused Compare & Write: Not Supported 01:19:18.024 Scatter-Gather List 01:19:18.024 SGL Command Set: Supported 01:19:18.024 SGL Keyed: Not Supported 01:19:18.024 SGL Bit Bucket Descriptor: Not Supported 01:19:18.024 SGL Metadata Pointer: Not Supported 01:19:18.024 Oversized SGL: Not Supported 01:19:18.024 SGL Metadata Address: Not Supported 01:19:18.024 SGL Offset: Not Supported 01:19:18.024 Transport SGL Data Block: Not Supported 01:19:18.024 Replay Protected Memory Block: Not Supported 01:19:18.024 01:19:18.024 Firmware Slot Information 01:19:18.024 ========================= 01:19:18.024 Active slot: 1 01:19:18.024 Slot 1 Firmware Revision: 1.0 01:19:18.024 01:19:18.024 01:19:18.024 Commands Supported and Effects 01:19:18.024 ============================== 01:19:18.024 Admin Commands 01:19:18.024 -------------- 01:19:18.024 Delete I/O Submission Queue (00h): Supported 01:19:18.024 Create I/O Submission Queue (01h): Supported 01:19:18.024 Get Log Page (02h): Supported 01:19:18.024 Delete I/O Completion Queue (04h): Supported 01:19:18.024 Create I/O Completion Queue (05h): Supported 01:19:18.024 Identify (06h): Supported 01:19:18.024 Abort (08h): Supported 01:19:18.024 Set Features (09h): Supported 01:19:18.024 Get Features (0Ah): Supported 01:19:18.024 Asynchronous Event Request (0Ch): Supported 01:19:18.024 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:18.024 Directive Send (19h): Supported 01:19:18.024 Directive Receive (1Ah): Supported 01:19:18.024 Virtualization Management (1Ch): Supported 01:19:18.024 Doorbell Buffer Config (7Ch): Supported 01:19:18.024 Format NVM (80h): Supported LBA-Change 01:19:18.024 I/O Commands 01:19:18.024 ------------ 01:19:18.024 Flush (00h): Supported LBA-Change 01:19:18.024 Write (01h): Supported LBA-Change 01:19:18.024 Read (02h): Supported 01:19:18.024 Compare (05h): 
Supported 01:19:18.024 Write Zeroes (08h): Supported LBA-Change 01:19:18.024 Dataset Management (09h): Supported LBA-Change 01:19:18.024 Unknown (0Ch): Supported 01:19:18.024 Unknown (12h): Supported 01:19:18.024 Copy (19h): Supported LBA-Change 01:19:18.024 Unknown (1Dh): Supported LBA-Change 01:19:18.024 01:19:18.024 Error Log 01:19:18.024 ========= 01:19:18.024 01:19:18.024 Arbitration 01:19:18.024 =========== 01:19:18.024 Arbitration Burst: no limit 01:19:18.024 01:19:18.024 Power Management 01:19:18.024 ================ 01:19:18.024 Number of Power States: 1 01:19:18.024 Current Power State: Power State #0 01:19:18.024 Power State #0: 01:19:18.024 Max Power: 25.00 W 01:19:18.024 Non-Operational State: Operational 01:19:18.024 Entry Latency: 16 microseconds 01:19:18.024 Exit Latency: 4 microseconds 01:19:18.024 Relative Read Throughput: 0 01:19:18.024 Relative Read Latency: 0 01:19:18.024 Relative Write Throughput: 0 01:19:18.024 Relative Write Latency: 0 01:19:18.024 Idle Power: Not Reported 01:19:18.024 Active Power: Not Reported 01:19:18.024 Non-Operational Permissive Mode: Not Supported 01:19:18.024 01:19:18.024 Health Information 01:19:18.024 ================== 01:19:18.024 Critical Warnings: 01:19:18.024 Available Spare Space: OK 01:19:18.024 Temperature: OK 01:19:18.024 Device Reliability: OK 01:19:18.024 Read Only: No 01:19:18.024 Volatile Memory Backup: OK 01:19:18.024 Current Temperature: 323 Kelvin (50 Celsius) 01:19:18.024 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:18.024 Available Spare: 0% 01:19:18.024 Available Spare Threshold: 0% 01:19:18.024 Life Percentage Used: 0% 01:19:18.024 Data Units Read: 2486 01:19:18.024 Data Units Written: 2273 01:19:18.024 Host Read Commands: 117040 01:19:18.024 Host Write Commands: 115309 01:19:18.024 Controller Busy Time: 0 minutes 01:19:18.024 Power Cycles: 0 01:19:18.024 Power On Hours: 0 hours 01:19:18.024 Unsafe Shutdowns: 0 01:19:18.024 Unrecoverable Media Errors: 0 01:19:18.024 Lifetime Error Log Entries: 0 01:19:18.024 Warning Temperature Time: 0 minutes 01:19:18.024 Critical Temperature Time: 0 minutes 01:19:18.024 01:19:18.024 Number of Queues 01:19:18.024 ================ 01:19:18.024 Number of I/O Submission Queues: 64 01:19:18.024 Number of I/O Completion Queues: 64 01:19:18.024 01:19:18.024 ZNS Specific Controller Data 01:19:18.024 ============================ 01:19:18.024 Zone Append Size Limit: 0 01:19:18.024 01:19:18.024 01:19:18.024 Active Namespaces 01:19:18.024 ================= 01:19:18.024 Namespace ID:1 01:19:18.024 Error Recovery Timeout: Unlimited 01:19:18.024 Command Set Identifier: NVM (00h) 01:19:18.024 Deallocate: Supported 01:19:18.024 Deallocated/Unwritten Error: Supported 01:19:18.024 Deallocated Read Value: All 0x00 01:19:18.024 Deallocate in Write Zeroes: Not Supported 01:19:18.024 Deallocated Guard Field: 0xFFFF 01:19:18.024 Flush: Supported 01:19:18.024 Reservation: Not Supported 01:19:18.024 Namespace Sharing Capabilities: Private 01:19:18.024 Size (in LBAs): 1048576 (4GiB) 01:19:18.024 Capacity (in LBAs): 1048576 (4GiB) 01:19:18.024 Utilization (in LBAs): 1048576 (4GiB) 01:19:18.024 Thin Provisioning: Not Supported 01:19:18.024 Per-NS Atomic Units: No 01:19:18.024 Maximum Single Source Range Length: 128 01:19:18.024 Maximum Copy Length: 128 01:19:18.024 Maximum Source Range Count: 128 01:19:18.024 NGUID/EUI64 Never Reused: No 01:19:18.024 Namespace Write Protected: No 01:19:18.024 Number of LBA Formats: 8 01:19:18.024 Current LBA Format: LBA Format #04 01:19:18.024 LBA Format #00: Data Size: 
512 Metadata Size: 0 01:19:18.024 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.024 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.024 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.024 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.024 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.024 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.024 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.024 01:19:18.024 NVM Specific Namespace Data 01:19:18.024 =========================== 01:19:18.024 Logical Block Storage Tag Mask: 0 01:19:18.024 Protection Information Capabilities: 01:19:18.024 16b Guard Protection Information Storage Tag Support: No 01:19:18.024 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:18.024 Storage Tag Check Read Support: No 01:19:18.024 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.024 Namespace ID:2 01:19:18.024 Error Recovery Timeout: Unlimited 01:19:18.024 Command Set Identifier: NVM (00h) 01:19:18.024 Deallocate: Supported 01:19:18.024 Deallocated/Unwritten Error: Supported 01:19:18.024 Deallocated Read Value: All 0x00 01:19:18.024 Deallocate in Write Zeroes: Not Supported 01:19:18.024 Deallocated Guard Field: 0xFFFF 01:19:18.024 Flush: Supported 01:19:18.024 Reservation: Not Supported 01:19:18.024 Namespace Sharing Capabilities: Private 01:19:18.024 Size (in LBAs): 1048576 (4GiB) 01:19:18.024 Capacity (in LBAs): 1048576 (4GiB) 01:19:18.024 Utilization (in LBAs): 1048576 (4GiB) 01:19:18.024 Thin Provisioning: Not Supported 01:19:18.024 Per-NS Atomic Units: No 01:19:18.024 Maximum Single Source Range Length: 128 01:19:18.024 Maximum Copy Length: 128 01:19:18.024 Maximum Source Range Count: 128 01:19:18.024 NGUID/EUI64 Never Reused: No 01:19:18.024 Namespace Write Protected: No 01:19:18.024 Number of LBA Formats: 8 01:19:18.024 Current LBA Format: LBA Format #04 01:19:18.024 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:18.024 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.025 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.025 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.025 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.025 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.025 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.025 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.025 01:19:18.025 NVM Specific Namespace Data 01:19:18.025 =========================== 01:19:18.025 Logical Block Storage Tag Mask: 0 01:19:18.025 Protection Information Capabilities: 01:19:18.025 16b Guard Protection Information Storage Tag Support: No 01:19:18.025 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
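When comparing namespaces across the controllers in dumps like these, it is usually easier to extract a few fields than to eyeball the full output. A sketch of one way to do that, assuming a single controller's dump has been captured to a hypothetical identify.log file:

  # Pull the namespace-geometry lines out of a captured identify dump.
  # identify.log is an assumed capture of one spdk_nvme_identify run.
  grep -E 'Number of LBA Formats|Current LBA Format|Size \(in LBAs\)' identify.log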
01:19:18.025 Storage Tag Check Read Support: No 01:19:18.025 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.025 Namespace ID:3 01:19:18.025 Error Recovery Timeout: Unlimited 01:19:18.025 Command Set Identifier: NVM (00h) 01:19:18.025 Deallocate: Supported 01:19:18.025 Deallocated/Unwritten Error: Supported 01:19:18.025 Deallocated Read Value: All 0x00 01:19:18.025 Deallocate in Write Zeroes: Not Supported 01:19:18.025 Deallocated Guard Field: 0xFFFF 01:19:18.025 Flush: Supported 01:19:18.025 Reservation: Not Supported 01:19:18.025 Namespace Sharing Capabilities: Private 01:19:18.025 Size (in LBAs): 1048576 (4GiB) 01:19:18.282 Capacity (in LBAs): 1048576 (4GiB) 01:19:18.282 Utilization (in LBAs): 1048576 (4GiB) 01:19:18.282 Thin Provisioning: Not Supported 01:19:18.282 Per-NS Atomic Units: No 01:19:18.282 Maximum Single Source Range Length: 128 01:19:18.282 Maximum Copy Length: 128 01:19:18.282 Maximum Source Range Count: 128 01:19:18.282 NGUID/EUI64 Never Reused: No 01:19:18.282 Namespace Write Protected: No 01:19:18.282 Number of LBA Formats: 8 01:19:18.282 Current LBA Format: LBA Format #04 01:19:18.282 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:18.282 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.282 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.282 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.282 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.282 01:19:18.282 NVM Specific Namespace Data 01:19:18.282 =========================== 01:19:18.282 Logical Block Storage Tag Mask: 0 01:19:18.282 Protection Information Capabilities: 01:19:18.282 16b Guard Protection Information Storage Tag Support: No 01:19:18.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:18.282 Storage Tag Check Read Support: No 01:19:18.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.282 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:19:18.282 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:19:18.540 ===================================================== 01:19:18.540 NVMe Controller at 0000:00:10.0 [1b36:0010] 01:19:18.540 ===================================================== 01:19:18.540 Controller Capabilities/Features 01:19:18.540 ================================ 01:19:18.540 Vendor ID: 1b36 01:19:18.540 Subsystem Vendor ID: 1af4 01:19:18.540 Serial Number: 12340 01:19:18.540 Model Number: QEMU NVMe Ctrl 01:19:18.540 Firmware Version: 8.0.0 01:19:18.540 Recommended Arb Burst: 6 01:19:18.540 IEEE OUI Identifier: 00 54 52 01:19:18.540 Multi-path I/O 01:19:18.540 May have multiple subsystem ports: No 01:19:18.540 May have multiple controllers: No 01:19:18.540 Associated with SR-IOV VF: No 01:19:18.540 Max Data Transfer Size: 524288 01:19:18.540 Max Number of Namespaces: 256 01:19:18.540 Max Number of I/O Queues: 64 01:19:18.540 NVMe Specification Version (VS): 1.4 01:19:18.540 NVMe Specification Version (Identify): 1.4 01:19:18.540 Maximum Queue Entries: 2048 01:19:18.540 Contiguous Queues Required: Yes 01:19:18.540 Arbitration Mechanisms Supported 01:19:18.540 Weighted Round Robin: Not Supported 01:19:18.540 Vendor Specific: Not Supported 01:19:18.540 Reset Timeout: 7500 ms 01:19:18.540 Doorbell Stride: 4 bytes 01:19:18.540 NVM Subsystem Reset: Not Supported 01:19:18.540 Command Sets Supported 01:19:18.540 NVM Command Set: Supported 01:19:18.540 Boot Partition: Not Supported 01:19:18.540 Memory Page Size Minimum: 4096 bytes 01:19:18.540 Memory Page Size Maximum: 65536 bytes 01:19:18.540 Persistent Memory Region: Not Supported 01:19:18.540 Optional Asynchronous Events Supported 01:19:18.540 Namespace Attribute Notices: Supported 01:19:18.540 Firmware Activation Notices: Not Supported 01:19:18.540 ANA Change Notices: Not Supported 01:19:18.540 PLE Aggregate Log Change Notices: Not Supported 01:19:18.541 LBA Status Info Alert Notices: Not Supported 01:19:18.541 EGE Aggregate Log Change Notices: Not Supported 01:19:18.541 Normal NVM Subsystem Shutdown event: Not Supported 01:19:18.541 Zone Descriptor Change Notices: Not Supported 01:19:18.541 Discovery Log Change Notices: Not Supported 01:19:18.541 Controller Attributes 01:19:18.541 128-bit Host Identifier: Not Supported 01:19:18.541 Non-Operational Permissive Mode: Not Supported 01:19:18.541 NVM Sets: Not Supported 01:19:18.541 Read Recovery Levels: Not Supported 01:19:18.541 Endurance Groups: Not Supported 01:19:18.541 Predictable Latency Mode: Not Supported 01:19:18.541 Traffic Based Keep ALive: Not Supported 01:19:18.541 Namespace Granularity: Not Supported 01:19:18.541 SQ Associations: Not Supported 01:19:18.541 UUID List: Not Supported 01:19:18.541 Multi-Domain Subsystem: Not Supported 01:19:18.541 Fixed Capacity Management: Not Supported 01:19:18.541 Variable Capacity Management: Not Supported 01:19:18.541 Delete Endurance Group: Not Supported 01:19:18.541 Delete NVM Set: Not Supported 01:19:18.541 Extended LBA Formats Supported: Supported 01:19:18.541 Flexible Data Placement Supported: Not Supported 01:19:18.541 01:19:18.541 Controller Memory Buffer Support 01:19:18.541 ================================ 01:19:18.541 Supported: No 01:19:18.541 01:19:18.541 Persistent Memory Region Support 01:19:18.541 
================================ 01:19:18.541 Supported: No 01:19:18.541 01:19:18.541 Admin Command Set Attributes 01:19:18.541 ============================ 01:19:18.541 Security Send/Receive: Not Supported 01:19:18.541 Format NVM: Supported 01:19:18.541 Firmware Activate/Download: Not Supported 01:19:18.541 Namespace Management: Supported 01:19:18.541 Device Self-Test: Not Supported 01:19:18.541 Directives: Supported 01:19:18.541 NVMe-MI: Not Supported 01:19:18.541 Virtualization Management: Not Supported 01:19:18.541 Doorbell Buffer Config: Supported 01:19:18.541 Get LBA Status Capability: Not Supported 01:19:18.541 Command & Feature Lockdown Capability: Not Supported 01:19:18.541 Abort Command Limit: 4 01:19:18.541 Async Event Request Limit: 4 01:19:18.541 Number of Firmware Slots: N/A 01:19:18.541 Firmware Slot 1 Read-Only: N/A 01:19:18.541 Firmware Activation Without Reset: N/A 01:19:18.541 Multiple Update Detection Support: N/A 01:19:18.541 Firmware Update Granularity: No Information Provided 01:19:18.541 Per-Namespace SMART Log: Yes 01:19:18.541 Asymmetric Namespace Access Log Page: Not Supported 01:19:18.541 Subsystem NQN: nqn.2019-08.org.qemu:12340 01:19:18.541 Command Effects Log Page: Supported 01:19:18.541 Get Log Page Extended Data: Supported 01:19:18.541 Telemetry Log Pages: Not Supported 01:19:18.541 Persistent Event Log Pages: Not Supported 01:19:18.541 Supported Log Pages Log Page: May Support 01:19:18.541 Commands Supported & Effects Log Page: Not Supported 01:19:18.541 Feature Identifiers & Effects Log Page:May Support 01:19:18.541 NVMe-MI Commands & Effects Log Page: May Support 01:19:18.541 Data Area 4 for Telemetry Log: Not Supported 01:19:18.541 Error Log Page Entries Supported: 1 01:19:18.541 Keep Alive: Not Supported 01:19:18.541 01:19:18.541 NVM Command Set Attributes 01:19:18.541 ========================== 01:19:18.541 Submission Queue Entry Size 01:19:18.541 Max: 64 01:19:18.541 Min: 64 01:19:18.541 Completion Queue Entry Size 01:19:18.541 Max: 16 01:19:18.541 Min: 16 01:19:18.541 Number of Namespaces: 256 01:19:18.541 Compare Command: Supported 01:19:18.541 Write Uncorrectable Command: Not Supported 01:19:18.541 Dataset Management Command: Supported 01:19:18.541 Write Zeroes Command: Supported 01:19:18.541 Set Features Save Field: Supported 01:19:18.541 Reservations: Not Supported 01:19:18.541 Timestamp: Supported 01:19:18.541 Copy: Supported 01:19:18.541 Volatile Write Cache: Present 01:19:18.541 Atomic Write Unit (Normal): 1 01:19:18.541 Atomic Write Unit (PFail): 1 01:19:18.541 Atomic Compare & Write Unit: 1 01:19:18.541 Fused Compare & Write: Not Supported 01:19:18.541 Scatter-Gather List 01:19:18.541 SGL Command Set: Supported 01:19:18.541 SGL Keyed: Not Supported 01:19:18.541 SGL Bit Bucket Descriptor: Not Supported 01:19:18.541 SGL Metadata Pointer: Not Supported 01:19:18.541 Oversized SGL: Not Supported 01:19:18.541 SGL Metadata Address: Not Supported 01:19:18.541 SGL Offset: Not Supported 01:19:18.541 Transport SGL Data Block: Not Supported 01:19:18.541 Replay Protected Memory Block: Not Supported 01:19:18.541 01:19:18.541 Firmware Slot Information 01:19:18.541 ========================= 01:19:18.541 Active slot: 1 01:19:18.541 Slot 1 Firmware Revision: 1.0 01:19:18.541 01:19:18.541 01:19:18.541 Commands Supported and Effects 01:19:18.541 ============================== 01:19:18.541 Admin Commands 01:19:18.541 -------------- 01:19:18.541 Delete I/O Submission Queue (00h): Supported 01:19:18.541 Create I/O Submission Queue (01h): Supported 01:19:18.541 
Get Log Page (02h): Supported 01:19:18.541 Delete I/O Completion Queue (04h): Supported 01:19:18.541 Create I/O Completion Queue (05h): Supported 01:19:18.541 Identify (06h): Supported 01:19:18.541 Abort (08h): Supported 01:19:18.541 Set Features (09h): Supported 01:19:18.541 Get Features (0Ah): Supported 01:19:18.541 Asynchronous Event Request (0Ch): Supported 01:19:18.541 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:18.541 Directive Send (19h): Supported 01:19:18.541 Directive Receive (1Ah): Supported 01:19:18.541 Virtualization Management (1Ch): Supported 01:19:18.541 Doorbell Buffer Config (7Ch): Supported 01:19:18.541 Format NVM (80h): Supported LBA-Change 01:19:18.541 I/O Commands 01:19:18.541 ------------ 01:19:18.541 Flush (00h): Supported LBA-Change 01:19:18.541 Write (01h): Supported LBA-Change 01:19:18.541 Read (02h): Supported 01:19:18.541 Compare (05h): Supported 01:19:18.541 Write Zeroes (08h): Supported LBA-Change 01:19:18.541 Dataset Management (09h): Supported LBA-Change 01:19:18.541 Unknown (0Ch): Supported 01:19:18.541 Unknown (12h): Supported 01:19:18.541 Copy (19h): Supported LBA-Change 01:19:18.541 Unknown (1Dh): Supported LBA-Change 01:19:18.541 01:19:18.541 Error Log 01:19:18.541 ========= 01:19:18.541 01:19:18.541 Arbitration 01:19:18.541 =========== 01:19:18.541 Arbitration Burst: no limit 01:19:18.541 01:19:18.541 Power Management 01:19:18.541 ================ 01:19:18.541 Number of Power States: 1 01:19:18.541 Current Power State: Power State #0 01:19:18.541 Power State #0: 01:19:18.541 Max Power: 25.00 W 01:19:18.541 Non-Operational State: Operational 01:19:18.541 Entry Latency: 16 microseconds 01:19:18.541 Exit Latency: 4 microseconds 01:19:18.541 Relative Read Throughput: 0 01:19:18.541 Relative Read Latency: 0 01:19:18.541 Relative Write Throughput: 0 01:19:18.541 Relative Write Latency: 0 01:19:18.541 Idle Power: Not Reported 01:19:18.541 Active Power: Not Reported 01:19:18.541 Non-Operational Permissive Mode: Not Supported 01:19:18.541 01:19:18.541 Health Information 01:19:18.541 ================== 01:19:18.541 Critical Warnings: 01:19:18.541 Available Spare Space: OK 01:19:18.541 Temperature: OK 01:19:18.541 Device Reliability: OK 01:19:18.541 Read Only: No 01:19:18.541 Volatile Memory Backup: OK 01:19:18.541 Current Temperature: 323 Kelvin (50 Celsius) 01:19:18.541 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:18.541 Available Spare: 0% 01:19:18.541 Available Spare Threshold: 0% 01:19:18.541 Life Percentage Used: 0% 01:19:18.541 Data Units Read: 791 01:19:18.541 Data Units Written: 719 01:19:18.541 Host Read Commands: 38415 01:19:18.541 Host Write Commands: 38201 01:19:18.541 Controller Busy Time: 0 minutes 01:19:18.541 Power Cycles: 0 01:19:18.541 Power On Hours: 0 hours 01:19:18.541 Unsafe Shutdowns: 0 01:19:18.541 Unrecoverable Media Errors: 0 01:19:18.541 Lifetime Error Log Entries: 0 01:19:18.541 Warning Temperature Time: 0 minutes 01:19:18.541 Critical Temperature Time: 0 minutes 01:19:18.541 01:19:18.541 Number of Queues 01:19:18.541 ================ 01:19:18.541 Number of I/O Submission Queues: 64 01:19:18.541 Number of I/O Completion Queues: 64 01:19:18.541 01:19:18.541 ZNS Specific Controller Data 01:19:18.541 ============================ 01:19:18.541 Zone Append Size Limit: 0 01:19:18.541 01:19:18.541 01:19:18.541 Active Namespaces 01:19:18.541 ================= 01:19:18.541 Namespace ID:1 01:19:18.541 Error Recovery Timeout: Unlimited 01:19:18.541 Command Set Identifier: NVM (00h) 01:19:18.541 Deallocate: Supported 
01:19:18.542 Deallocated/Unwritten Error: Supported 01:19:18.542 Deallocated Read Value: All 0x00 01:19:18.542 Deallocate in Write Zeroes: Not Supported 01:19:18.542 Deallocated Guard Field: 0xFFFF 01:19:18.542 Flush: Supported 01:19:18.542 Reservation: Not Supported 01:19:18.542 Metadata Transferred as: Separate Metadata Buffer 01:19:18.542 Namespace Sharing Capabilities: Private 01:19:18.542 Size (in LBAs): 1548666 (5GiB) 01:19:18.542 Capacity (in LBAs): 1548666 (5GiB) 01:19:18.542 Utilization (in LBAs): 1548666 (5GiB) 01:19:18.542 Thin Provisioning: Not Supported 01:19:18.542 Per-NS Atomic Units: No 01:19:18.542 Maximum Single Source Range Length: 128 01:19:18.542 Maximum Copy Length: 128 01:19:18.542 Maximum Source Range Count: 128 01:19:18.542 NGUID/EUI64 Never Reused: No 01:19:18.542 Namespace Write Protected: No 01:19:18.542 Number of LBA Formats: 8 01:19:18.542 Current LBA Format: LBA Format #07 01:19:18.542 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:18.542 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:18.542 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:18.542 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:18.542 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:18.542 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:18.542 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:18.542 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:18.542 01:19:18.542 NVM Specific Namespace Data 01:19:18.542 =========================== 01:19:18.542 Logical Block Storage Tag Mask: 0 01:19:18.542 Protection Information Capabilities: 01:19:18.542 16b Guard Protection Information Storage Tag Support: No 01:19:18.542 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:18.542 Storage Tag Check Read Support: No 01:19:18.542 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:18.542 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:19:18.542 05:14:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 01:19:19.107 ===================================================== 01:19:19.107 NVMe Controller at 0000:00:11.0 [1b36:0010] 01:19:19.107 ===================================================== 01:19:19.107 Controller Capabilities/Features 01:19:19.107 ================================ 01:19:19.107 Vendor ID: 1b36 01:19:19.107 Subsystem Vendor ID: 1af4 01:19:19.107 Serial Number: 12341 01:19:19.107 Model Number: QEMU NVMe Ctrl 01:19:19.107 Firmware Version: 8.0.0 01:19:19.107 Recommended Arb Burst: 6 01:19:19.107 IEEE OUI Identifier: 00 54 52 01:19:19.107 Multi-path I/O 01:19:19.107 May have multiple subsystem ports: No 01:19:19.107 May have multiple 
controllers: No 01:19:19.107 Associated with SR-IOV VF: No 01:19:19.107 Max Data Transfer Size: 524288 01:19:19.107 Max Number of Namespaces: 256 01:19:19.107 Max Number of I/O Queues: 64 01:19:19.107 NVMe Specification Version (VS): 1.4 01:19:19.107 NVMe Specification Version (Identify): 1.4 01:19:19.107 Maximum Queue Entries: 2048 01:19:19.107 Contiguous Queues Required: Yes 01:19:19.107 Arbitration Mechanisms Supported 01:19:19.107 Weighted Round Robin: Not Supported 01:19:19.107 Vendor Specific: Not Supported 01:19:19.107 Reset Timeout: 7500 ms 01:19:19.107 Doorbell Stride: 4 bytes 01:19:19.107 NVM Subsystem Reset: Not Supported 01:19:19.107 Command Sets Supported 01:19:19.108 NVM Command Set: Supported 01:19:19.108 Boot Partition: Not Supported 01:19:19.108 Memory Page Size Minimum: 4096 bytes 01:19:19.108 Memory Page Size Maximum: 65536 bytes 01:19:19.108 Persistent Memory Region: Not Supported 01:19:19.108 Optional Asynchronous Events Supported 01:19:19.108 Namespace Attribute Notices: Supported 01:19:19.108 Firmware Activation Notices: Not Supported 01:19:19.108 ANA Change Notices: Not Supported 01:19:19.108 PLE Aggregate Log Change Notices: Not Supported 01:19:19.108 LBA Status Info Alert Notices: Not Supported 01:19:19.108 EGE Aggregate Log Change Notices: Not Supported 01:19:19.108 Normal NVM Subsystem Shutdown event: Not Supported 01:19:19.108 Zone Descriptor Change Notices: Not Supported 01:19:19.108 Discovery Log Change Notices: Not Supported 01:19:19.108 Controller Attributes 01:19:19.108 128-bit Host Identifier: Not Supported 01:19:19.108 Non-Operational Permissive Mode: Not Supported 01:19:19.108 NVM Sets: Not Supported 01:19:19.108 Read Recovery Levels: Not Supported 01:19:19.108 Endurance Groups: Not Supported 01:19:19.108 Predictable Latency Mode: Not Supported 01:19:19.108 Traffic Based Keep ALive: Not Supported 01:19:19.108 Namespace Granularity: Not Supported 01:19:19.108 SQ Associations: Not Supported 01:19:19.108 UUID List: Not Supported 01:19:19.108 Multi-Domain Subsystem: Not Supported 01:19:19.108 Fixed Capacity Management: Not Supported 01:19:19.108 Variable Capacity Management: Not Supported 01:19:19.108 Delete Endurance Group: Not Supported 01:19:19.108 Delete NVM Set: Not Supported 01:19:19.108 Extended LBA Formats Supported: Supported 01:19:19.108 Flexible Data Placement Supported: Not Supported 01:19:19.108 01:19:19.108 Controller Memory Buffer Support 01:19:19.108 ================================ 01:19:19.108 Supported: No 01:19:19.108 01:19:19.108 Persistent Memory Region Support 01:19:19.108 ================================ 01:19:19.108 Supported: No 01:19:19.108 01:19:19.108 Admin Command Set Attributes 01:19:19.108 ============================ 01:19:19.108 Security Send/Receive: Not Supported 01:19:19.108 Format NVM: Supported 01:19:19.108 Firmware Activate/Download: Not Supported 01:19:19.108 Namespace Management: Supported 01:19:19.108 Device Self-Test: Not Supported 01:19:19.108 Directives: Supported 01:19:19.108 NVMe-MI: Not Supported 01:19:19.108 Virtualization Management: Not Supported 01:19:19.108 Doorbell Buffer Config: Supported 01:19:19.108 Get LBA Status Capability: Not Supported 01:19:19.108 Command & Feature Lockdown Capability: Not Supported 01:19:19.108 Abort Command Limit: 4 01:19:19.108 Async Event Request Limit: 4 01:19:19.108 Number of Firmware Slots: N/A 01:19:19.108 Firmware Slot 1 Read-Only: N/A 01:19:19.108 Firmware Activation Without Reset: N/A 01:19:19.108 Multiple Update Detection Support: N/A 01:19:19.108 Firmware Update 
Granularity: No Information Provided 01:19:19.108 Per-Namespace SMART Log: Yes 01:19:19.108 Asymmetric Namespace Access Log Page: Not Supported 01:19:19.108 Subsystem NQN: nqn.2019-08.org.qemu:12341 01:19:19.108 Command Effects Log Page: Supported 01:19:19.108 Get Log Page Extended Data: Supported 01:19:19.108 Telemetry Log Pages: Not Supported 01:19:19.108 Persistent Event Log Pages: Not Supported 01:19:19.108 Supported Log Pages Log Page: May Support 01:19:19.108 Commands Supported & Effects Log Page: Not Supported 01:19:19.108 Feature Identifiers & Effects Log Page:May Support 01:19:19.108 NVMe-MI Commands & Effects Log Page: May Support 01:19:19.108 Data Area 4 for Telemetry Log: Not Supported 01:19:19.108 Error Log Page Entries Supported: 1 01:19:19.108 Keep Alive: Not Supported 01:19:19.108 01:19:19.108 NVM Command Set Attributes 01:19:19.108 ========================== 01:19:19.108 Submission Queue Entry Size 01:19:19.108 Max: 64 01:19:19.108 Min: 64 01:19:19.108 Completion Queue Entry Size 01:19:19.108 Max: 16 01:19:19.108 Min: 16 01:19:19.108 Number of Namespaces: 256 01:19:19.108 Compare Command: Supported 01:19:19.108 Write Uncorrectable Command: Not Supported 01:19:19.108 Dataset Management Command: Supported 01:19:19.108 Write Zeroes Command: Supported 01:19:19.108 Set Features Save Field: Supported 01:19:19.108 Reservations: Not Supported 01:19:19.108 Timestamp: Supported 01:19:19.108 Copy: Supported 01:19:19.108 Volatile Write Cache: Present 01:19:19.108 Atomic Write Unit (Normal): 1 01:19:19.108 Atomic Write Unit (PFail): 1 01:19:19.108 Atomic Compare & Write Unit: 1 01:19:19.108 Fused Compare & Write: Not Supported 01:19:19.108 Scatter-Gather List 01:19:19.108 SGL Command Set: Supported 01:19:19.108 SGL Keyed: Not Supported 01:19:19.108 SGL Bit Bucket Descriptor: Not Supported 01:19:19.108 SGL Metadata Pointer: Not Supported 01:19:19.108 Oversized SGL: Not Supported 01:19:19.108 SGL Metadata Address: Not Supported 01:19:19.108 SGL Offset: Not Supported 01:19:19.108 Transport SGL Data Block: Not Supported 01:19:19.108 Replay Protected Memory Block: Not Supported 01:19:19.108 01:19:19.108 Firmware Slot Information 01:19:19.108 ========================= 01:19:19.108 Active slot: 1 01:19:19.108 Slot 1 Firmware Revision: 1.0 01:19:19.108 01:19:19.108 01:19:19.108 Commands Supported and Effects 01:19:19.108 ============================== 01:19:19.108 Admin Commands 01:19:19.108 -------------- 01:19:19.108 Delete I/O Submission Queue (00h): Supported 01:19:19.108 Create I/O Submission Queue (01h): Supported 01:19:19.108 Get Log Page (02h): Supported 01:19:19.108 Delete I/O Completion Queue (04h): Supported 01:19:19.108 Create I/O Completion Queue (05h): Supported 01:19:19.108 Identify (06h): Supported 01:19:19.108 Abort (08h): Supported 01:19:19.108 Set Features (09h): Supported 01:19:19.108 Get Features (0Ah): Supported 01:19:19.108 Asynchronous Event Request (0Ch): Supported 01:19:19.108 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:19.108 Directive Send (19h): Supported 01:19:19.108 Directive Receive (1Ah): Supported 01:19:19.108 Virtualization Management (1Ch): Supported 01:19:19.108 Doorbell Buffer Config (7Ch): Supported 01:19:19.108 Format NVM (80h): Supported LBA-Change 01:19:19.108 I/O Commands 01:19:19.108 ------------ 01:19:19.108 Flush (00h): Supported LBA-Change 01:19:19.108 Write (01h): Supported LBA-Change 01:19:19.108 Read (02h): Supported 01:19:19.108 Compare (05h): Supported 01:19:19.108 Write Zeroes (08h): Supported LBA-Change 01:19:19.108 
Dataset Management (09h): Supported LBA-Change 01:19:19.108 Unknown (0Ch): Supported 01:19:19.108 Unknown (12h): Supported 01:19:19.108 Copy (19h): Supported LBA-Change 01:19:19.108 Unknown (1Dh): Supported LBA-Change 01:19:19.108 01:19:19.108 Error Log 01:19:19.108 ========= 01:19:19.108 01:19:19.108 Arbitration 01:19:19.108 =========== 01:19:19.108 Arbitration Burst: no limit 01:19:19.108 01:19:19.108 Power Management 01:19:19.108 ================ 01:19:19.108 Number of Power States: 1 01:19:19.108 Current Power State: Power State #0 01:19:19.108 Power State #0: 01:19:19.108 Max Power: 25.00 W 01:19:19.108 Non-Operational State: Operational 01:19:19.108 Entry Latency: 16 microseconds 01:19:19.108 Exit Latency: 4 microseconds 01:19:19.108 Relative Read Throughput: 0 01:19:19.108 Relative Read Latency: 0 01:19:19.108 Relative Write Throughput: 0 01:19:19.108 Relative Write Latency: 0 01:19:19.108 Idle Power: Not Reported 01:19:19.108 Active Power: Not Reported 01:19:19.108 Non-Operational Permissive Mode: Not Supported 01:19:19.108 01:19:19.108 Health Information 01:19:19.108 ================== 01:19:19.108 Critical Warnings: 01:19:19.108 Available Spare Space: OK 01:19:19.108 Temperature: OK 01:19:19.108 Device Reliability: OK 01:19:19.108 Read Only: No 01:19:19.108 Volatile Memory Backup: OK 01:19:19.108 Current Temperature: 323 Kelvin (50 Celsius) 01:19:19.108 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:19.108 Available Spare: 0% 01:19:19.108 Available Spare Threshold: 0% 01:19:19.108 Life Percentage Used: 0% 01:19:19.108 Data Units Read: 1214 01:19:19.108 Data Units Written: 1075 01:19:19.108 Host Read Commands: 57537 01:19:19.108 Host Write Commands: 56217 01:19:19.108 Controller Busy Time: 0 minutes 01:19:19.108 Power Cycles: 0 01:19:19.108 Power On Hours: 0 hours 01:19:19.108 Unsafe Shutdowns: 0 01:19:19.108 Unrecoverable Media Errors: 0 01:19:19.108 Lifetime Error Log Entries: 0 01:19:19.108 Warning Temperature Time: 0 minutes 01:19:19.108 Critical Temperature Time: 0 minutes 01:19:19.108 01:19:19.108 Number of Queues 01:19:19.108 ================ 01:19:19.108 Number of I/O Submission Queues: 64 01:19:19.108 Number of I/O Completion Queues: 64 01:19:19.108 01:19:19.108 ZNS Specific Controller Data 01:19:19.109 ============================ 01:19:19.109 Zone Append Size Limit: 0 01:19:19.109 01:19:19.109 01:19:19.109 Active Namespaces 01:19:19.109 ================= 01:19:19.109 Namespace ID:1 01:19:19.109 Error Recovery Timeout: Unlimited 01:19:19.109 Command Set Identifier: NVM (00h) 01:19:19.109 Deallocate: Supported 01:19:19.109 Deallocated/Unwritten Error: Supported 01:19:19.109 Deallocated Read Value: All 0x00 01:19:19.109 Deallocate in Write Zeroes: Not Supported 01:19:19.109 Deallocated Guard Field: 0xFFFF 01:19:19.109 Flush: Supported 01:19:19.109 Reservation: Not Supported 01:19:19.109 Namespace Sharing Capabilities: Private 01:19:19.109 Size (in LBAs): 1310720 (5GiB) 01:19:19.109 Capacity (in LBAs): 1310720 (5GiB) 01:19:19.109 Utilization (in LBAs): 1310720 (5GiB) 01:19:19.109 Thin Provisioning: Not Supported 01:19:19.109 Per-NS Atomic Units: No 01:19:19.109 Maximum Single Source Range Length: 128 01:19:19.109 Maximum Copy Length: 128 01:19:19.109 Maximum Source Range Count: 128 01:19:19.109 NGUID/EUI64 Never Reused: No 01:19:19.109 Namespace Write Protected: No 01:19:19.109 Number of LBA Formats: 8 01:19:19.109 Current LBA Format: LBA Format #04 01:19:19.109 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:19.109 LBA Format #01: Data Size: 512 Metadata Size: 
8 01:19:19.109 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:19.109 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:19.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:19.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:19.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:19.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:19.109 01:19:19.109 NVM Specific Namespace Data 01:19:19.109 =========================== 01:19:19.109 Logical Block Storage Tag Mask: 0 01:19:19.109 Protection Information Capabilities: 01:19:19.109 16b Guard Protection Information Storage Tag Support: No 01:19:19.109 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:19.109 Storage Tag Check Read Support: No 01:19:19.109 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.109 05:14:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:19:19.109 05:14:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 01:19:19.367 ===================================================== 01:19:19.367 NVMe Controller at 0000:00:12.0 [1b36:0010] 01:19:19.367 ===================================================== 01:19:19.367 Controller Capabilities/Features 01:19:19.367 ================================ 01:19:19.367 Vendor ID: 1b36 01:19:19.367 Subsystem Vendor ID: 1af4 01:19:19.367 Serial Number: 12342 01:19:19.367 Model Number: QEMU NVMe Ctrl 01:19:19.367 Firmware Version: 8.0.0 01:19:19.367 Recommended Arb Burst: 6 01:19:19.367 IEEE OUI Identifier: 00 54 52 01:19:19.367 Multi-path I/O 01:19:19.367 May have multiple subsystem ports: No 01:19:19.367 May have multiple controllers: No 01:19:19.367 Associated with SR-IOV VF: No 01:19:19.367 Max Data Transfer Size: 524288 01:19:19.367 Max Number of Namespaces: 256 01:19:19.367 Max Number of I/O Queues: 64 01:19:19.367 NVMe Specification Version (VS): 1.4 01:19:19.367 NVMe Specification Version (Identify): 1.4 01:19:19.367 Maximum Queue Entries: 2048 01:19:19.367 Contiguous Queues Required: Yes 01:19:19.367 Arbitration Mechanisms Supported 01:19:19.367 Weighted Round Robin: Not Supported 01:19:19.367 Vendor Specific: Not Supported 01:19:19.367 Reset Timeout: 7500 ms 01:19:19.367 Doorbell Stride: 4 bytes 01:19:19.367 NVM Subsystem Reset: Not Supported 01:19:19.367 Command Sets Supported 01:19:19.367 NVM Command Set: Supported 01:19:19.367 Boot Partition: Not Supported 01:19:19.367 Memory Page Size Minimum: 4096 bytes 01:19:19.367 Memory Page Size Maximum: 65536 bytes 01:19:19.367 Persistent Memory Region: Not Supported 01:19:19.367 Optional Asynchronous Events Supported 01:19:19.367 Namespace Attribute Notices: Supported 01:19:19.367 
Firmware Activation Notices: Not Supported 01:19:19.367 ANA Change Notices: Not Supported 01:19:19.367 PLE Aggregate Log Change Notices: Not Supported 01:19:19.367 LBA Status Info Alert Notices: Not Supported 01:19:19.367 EGE Aggregate Log Change Notices: Not Supported 01:19:19.367 Normal NVM Subsystem Shutdown event: Not Supported 01:19:19.367 Zone Descriptor Change Notices: Not Supported 01:19:19.367 Discovery Log Change Notices: Not Supported 01:19:19.367 Controller Attributes 01:19:19.368 128-bit Host Identifier: Not Supported 01:19:19.368 Non-Operational Permissive Mode: Not Supported 01:19:19.368 NVM Sets: Not Supported 01:19:19.368 Read Recovery Levels: Not Supported 01:19:19.368 Endurance Groups: Not Supported 01:19:19.368 Predictable Latency Mode: Not Supported 01:19:19.368 Traffic Based Keep ALive: Not Supported 01:19:19.368 Namespace Granularity: Not Supported 01:19:19.368 SQ Associations: Not Supported 01:19:19.368 UUID List: Not Supported 01:19:19.368 Multi-Domain Subsystem: Not Supported 01:19:19.368 Fixed Capacity Management: Not Supported 01:19:19.368 Variable Capacity Management: Not Supported 01:19:19.368 Delete Endurance Group: Not Supported 01:19:19.368 Delete NVM Set: Not Supported 01:19:19.368 Extended LBA Formats Supported: Supported 01:19:19.368 Flexible Data Placement Supported: Not Supported 01:19:19.368 01:19:19.368 Controller Memory Buffer Support 01:19:19.368 ================================ 01:19:19.368 Supported: No 01:19:19.368 01:19:19.368 Persistent Memory Region Support 01:19:19.368 ================================ 01:19:19.368 Supported: No 01:19:19.368 01:19:19.368 Admin Command Set Attributes 01:19:19.368 ============================ 01:19:19.368 Security Send/Receive: Not Supported 01:19:19.368 Format NVM: Supported 01:19:19.368 Firmware Activate/Download: Not Supported 01:19:19.368 Namespace Management: Supported 01:19:19.368 Device Self-Test: Not Supported 01:19:19.368 Directives: Supported 01:19:19.368 NVMe-MI: Not Supported 01:19:19.368 Virtualization Management: Not Supported 01:19:19.368 Doorbell Buffer Config: Supported 01:19:19.368 Get LBA Status Capability: Not Supported 01:19:19.368 Command & Feature Lockdown Capability: Not Supported 01:19:19.368 Abort Command Limit: 4 01:19:19.368 Async Event Request Limit: 4 01:19:19.368 Number of Firmware Slots: N/A 01:19:19.368 Firmware Slot 1 Read-Only: N/A 01:19:19.368 Firmware Activation Without Reset: N/A 01:19:19.368 Multiple Update Detection Support: N/A 01:19:19.368 Firmware Update Granularity: No Information Provided 01:19:19.368 Per-Namespace SMART Log: Yes 01:19:19.368 Asymmetric Namespace Access Log Page: Not Supported 01:19:19.368 Subsystem NQN: nqn.2019-08.org.qemu:12342 01:19:19.368 Command Effects Log Page: Supported 01:19:19.368 Get Log Page Extended Data: Supported 01:19:19.368 Telemetry Log Pages: Not Supported 01:19:19.368 Persistent Event Log Pages: Not Supported 01:19:19.368 Supported Log Pages Log Page: May Support 01:19:19.368 Commands Supported & Effects Log Page: Not Supported 01:19:19.368 Feature Identifiers & Effects Log Page:May Support 01:19:19.368 NVMe-MI Commands & Effects Log Page: May Support 01:19:19.368 Data Area 4 for Telemetry Log: Not Supported 01:19:19.368 Error Log Page Entries Supported: 1 01:19:19.368 Keep Alive: Not Supported 01:19:19.368 01:19:19.368 NVM Command Set Attributes 01:19:19.368 ========================== 01:19:19.368 Submission Queue Entry Size 01:19:19.368 Max: 64 01:19:19.368 Min: 64 01:19:19.368 Completion Queue Entry Size 01:19:19.368 Max: 16 
01:19:19.368 Min: 16 01:19:19.368 Number of Namespaces: 256 01:19:19.368 Compare Command: Supported 01:19:19.368 Write Uncorrectable Command: Not Supported 01:19:19.368 Dataset Management Command: Supported 01:19:19.368 Write Zeroes Command: Supported 01:19:19.368 Set Features Save Field: Supported 01:19:19.368 Reservations: Not Supported 01:19:19.368 Timestamp: Supported 01:19:19.368 Copy: Supported 01:19:19.368 Volatile Write Cache: Present 01:19:19.368 Atomic Write Unit (Normal): 1 01:19:19.368 Atomic Write Unit (PFail): 1 01:19:19.368 Atomic Compare & Write Unit: 1 01:19:19.368 Fused Compare & Write: Not Supported 01:19:19.368 Scatter-Gather List 01:19:19.368 SGL Command Set: Supported 01:19:19.368 SGL Keyed: Not Supported 01:19:19.368 SGL Bit Bucket Descriptor: Not Supported 01:19:19.368 SGL Metadata Pointer: Not Supported 01:19:19.368 Oversized SGL: Not Supported 01:19:19.368 SGL Metadata Address: Not Supported 01:19:19.368 SGL Offset: Not Supported 01:19:19.368 Transport SGL Data Block: Not Supported 01:19:19.368 Replay Protected Memory Block: Not Supported 01:19:19.368 01:19:19.368 Firmware Slot Information 01:19:19.368 ========================= 01:19:19.368 Active slot: 1 01:19:19.368 Slot 1 Firmware Revision: 1.0 01:19:19.368 01:19:19.368 01:19:19.368 Commands Supported and Effects 01:19:19.368 ============================== 01:19:19.368 Admin Commands 01:19:19.368 -------------- 01:19:19.368 Delete I/O Submission Queue (00h): Supported 01:19:19.368 Create I/O Submission Queue (01h): Supported 01:19:19.368 Get Log Page (02h): Supported 01:19:19.368 Delete I/O Completion Queue (04h): Supported 01:19:19.368 Create I/O Completion Queue (05h): Supported 01:19:19.368 Identify (06h): Supported 01:19:19.368 Abort (08h): Supported 01:19:19.368 Set Features (09h): Supported 01:19:19.368 Get Features (0Ah): Supported 01:19:19.368 Asynchronous Event Request (0Ch): Supported 01:19:19.368 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:19.368 Directive Send (19h): Supported 01:19:19.368 Directive Receive (1Ah): Supported 01:19:19.368 Virtualization Management (1Ch): Supported 01:19:19.368 Doorbell Buffer Config (7Ch): Supported 01:19:19.368 Format NVM (80h): Supported LBA-Change 01:19:19.368 I/O Commands 01:19:19.368 ------------ 01:19:19.368 Flush (00h): Supported LBA-Change 01:19:19.368 Write (01h): Supported LBA-Change 01:19:19.368 Read (02h): Supported 01:19:19.368 Compare (05h): Supported 01:19:19.368 Write Zeroes (08h): Supported LBA-Change 01:19:19.368 Dataset Management (09h): Supported LBA-Change 01:19:19.368 Unknown (0Ch): Supported 01:19:19.368 Unknown (12h): Supported 01:19:19.368 Copy (19h): Supported LBA-Change 01:19:19.368 Unknown (1Dh): Supported LBA-Change 01:19:19.368 01:19:19.368 Error Log 01:19:19.368 ========= 01:19:19.368 01:19:19.368 Arbitration 01:19:19.368 =========== 01:19:19.368 Arbitration Burst: no limit 01:19:19.368 01:19:19.368 Power Management 01:19:19.368 ================ 01:19:19.368 Number of Power States: 1 01:19:19.368 Current Power State: Power State #0 01:19:19.368 Power State #0: 01:19:19.368 Max Power: 25.00 W 01:19:19.368 Non-Operational State: Operational 01:19:19.368 Entry Latency: 16 microseconds 01:19:19.368 Exit Latency: 4 microseconds 01:19:19.368 Relative Read Throughput: 0 01:19:19.368 Relative Read Latency: 0 01:19:19.368 Relative Write Throughput: 0 01:19:19.368 Relative Write Latency: 0 01:19:19.368 Idle Power: Not Reported 01:19:19.368 Active Power: Not Reported 01:19:19.368 Non-Operational Permissive Mode: Not Supported 
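Each controller dump ends with a Health Information block like the one that follows; when scripting against these dumps, a simple sanity check is to compare the current temperature against the reported threshold. A sketch, again assuming a hypothetical identify.log capture; the awk field split relies on the "Field: value" layout seen above:

  # Fail if the controller is running at or above its temperature threshold.
  cur=$(awk -F': ' '/Current Temperature:/ {print $2 + 0; exit}' identify.log)
  thr=$(awk -F': ' '/Temperature Threshold:/ {print $2 + 0; exit}' identify.log)
  (( cur < thr )) && echo "temperature OK: $cur K < $thr K" || exit 1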
01:19:19.368 01:19:19.368 Health Information 01:19:19.368 ================== 01:19:19.368 Critical Warnings: 01:19:19.368 Available Spare Space: OK 01:19:19.368 Temperature: OK 01:19:19.368 Device Reliability: OK 01:19:19.368 Read Only: No 01:19:19.368 Volatile Memory Backup: OK 01:19:19.368 Current Temperature: 323 Kelvin (50 Celsius) 01:19:19.368 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:19.368 Available Spare: 0% 01:19:19.368 Available Spare Threshold: 0% 01:19:19.368 Life Percentage Used: 0% 01:19:19.368 Data Units Read: 2486 01:19:19.368 Data Units Written: 2273 01:19:19.368 Host Read Commands: 117040 01:19:19.368 Host Write Commands: 115309 01:19:19.368 Controller Busy Time: 0 minutes 01:19:19.368 Power Cycles: 0 01:19:19.368 Power On Hours: 0 hours 01:19:19.368 Unsafe Shutdowns: 0 01:19:19.368 Unrecoverable Media Errors: 0 01:19:19.368 Lifetime Error Log Entries: 0 01:19:19.368 Warning Temperature Time: 0 minutes 01:19:19.368 Critical Temperature Time: 0 minutes 01:19:19.368 01:19:19.368 Number of Queues 01:19:19.368 ================ 01:19:19.368 Number of I/O Submission Queues: 64 01:19:19.368 Number of I/O Completion Queues: 64 01:19:19.368 01:19:19.368 ZNS Specific Controller Data 01:19:19.368 ============================ 01:19:19.368 Zone Append Size Limit: 0 01:19:19.368 01:19:19.368 01:19:19.368 Active Namespaces 01:19:19.368 ================= 01:19:19.368 Namespace ID:1 01:19:19.368 Error Recovery Timeout: Unlimited 01:19:19.368 Command Set Identifier: NVM (00h) 01:19:19.368 Deallocate: Supported 01:19:19.368 Deallocated/Unwritten Error: Supported 01:19:19.368 Deallocated Read Value: All 0x00 01:19:19.368 Deallocate in Write Zeroes: Not Supported 01:19:19.368 Deallocated Guard Field: 0xFFFF 01:19:19.368 Flush: Supported 01:19:19.368 Reservation: Not Supported 01:19:19.368 Namespace Sharing Capabilities: Private 01:19:19.369 Size (in LBAs): 1048576 (4GiB) 01:19:19.369 Capacity (in LBAs): 1048576 (4GiB) 01:19:19.369 Utilization (in LBAs): 1048576 (4GiB) 01:19:19.369 Thin Provisioning: Not Supported 01:19:19.369 Per-NS Atomic Units: No 01:19:19.369 Maximum Single Source Range Length: 128 01:19:19.369 Maximum Copy Length: 128 01:19:19.369 Maximum Source Range Count: 128 01:19:19.369 NGUID/EUI64 Never Reused: No 01:19:19.369 Namespace Write Protected: No 01:19:19.369 Number of LBA Formats: 8 01:19:19.369 Current LBA Format: LBA Format #04 01:19:19.369 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:19.369 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:19.369 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:19.369 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:19.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:19.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:19.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:19.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:19.369 01:19:19.369 NVM Specific Namespace Data 01:19:19.369 =========================== 01:19:19.369 Logical Block Storage Tag Mask: 0 01:19:19.369 Protection Information Capabilities: 01:19:19.369 16b Guard Protection Information Storage Tag Support: No 01:19:19.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:19.369 Storage Tag Check Read Support: No 01:19:19.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Namespace ID:2 01:19:19.369 Error Recovery Timeout: Unlimited 01:19:19.369 Command Set Identifier: NVM (00h) 01:19:19.369 Deallocate: Supported 01:19:19.369 Deallocated/Unwritten Error: Supported 01:19:19.369 Deallocated Read Value: All 0x00 01:19:19.369 Deallocate in Write Zeroes: Not Supported 01:19:19.369 Deallocated Guard Field: 0xFFFF 01:19:19.369 Flush: Supported 01:19:19.369 Reservation: Not Supported 01:19:19.369 Namespace Sharing Capabilities: Private 01:19:19.369 Size (in LBAs): 1048576 (4GiB) 01:19:19.369 Capacity (in LBAs): 1048576 (4GiB) 01:19:19.369 Utilization (in LBAs): 1048576 (4GiB) 01:19:19.369 Thin Provisioning: Not Supported 01:19:19.369 Per-NS Atomic Units: No 01:19:19.369 Maximum Single Source Range Length: 128 01:19:19.369 Maximum Copy Length: 128 01:19:19.369 Maximum Source Range Count: 128 01:19:19.369 NGUID/EUI64 Never Reused: No 01:19:19.369 Namespace Write Protected: No 01:19:19.369 Number of LBA Formats: 8 01:19:19.369 Current LBA Format: LBA Format #04 01:19:19.369 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:19.369 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:19.369 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:19.369 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:19.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:19.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:19.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:19.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:19.369 01:19:19.369 NVM Specific Namespace Data 01:19:19.369 =========================== 01:19:19.369 Logical Block Storage Tag Mask: 0 01:19:19.369 Protection Information Capabilities: 01:19:19.369 16b Guard Protection Information Storage Tag Support: No 01:19:19.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:19.369 Storage Tag Check Read Support: No 01:19:19.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Namespace ID:3 01:19:19.369 Error Recovery Timeout: Unlimited 01:19:19.369 Command Set Identifier: NVM (00h) 01:19:19.369 Deallocate: Supported 01:19:19.369 Deallocated/Unwritten Error: Supported 01:19:19.369 Deallocated Read 
Value: All 0x00 01:19:19.369 Deallocate in Write Zeroes: Not Supported 01:19:19.369 Deallocated Guard Field: 0xFFFF 01:19:19.369 Flush: Supported 01:19:19.369 Reservation: Not Supported 01:19:19.369 Namespace Sharing Capabilities: Private 01:19:19.369 Size (in LBAs): 1048576 (4GiB) 01:19:19.369 Capacity (in LBAs): 1048576 (4GiB) 01:19:19.369 Utilization (in LBAs): 1048576 (4GiB) 01:19:19.369 Thin Provisioning: Not Supported 01:19:19.369 Per-NS Atomic Units: No 01:19:19.369 Maximum Single Source Range Length: 128 01:19:19.369 Maximum Copy Length: 128 01:19:19.369 Maximum Source Range Count: 128 01:19:19.369 NGUID/EUI64 Never Reused: No 01:19:19.369 Namespace Write Protected: No 01:19:19.369 Number of LBA Formats: 8 01:19:19.369 Current LBA Format: LBA Format #04 01:19:19.369 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:19.369 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:19.369 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:19.369 LBA Format #03: Data Size: 512 Metadata Size: 64 01:19:19.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:19.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:19.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:19.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:19.369 01:19:19.369 NVM Specific Namespace Data 01:19:19.369 =========================== 01:19:19.369 Logical Block Storage Tag Mask: 0 01:19:19.369 Protection Information Capabilities: 01:19:19.369 16b Guard Protection Information Storage Tag Support: No 01:19:19.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:19.369 Storage Tag Check Read Support: No 01:19:19.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.627 05:14:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:19:19.627 05:14:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 01:19:19.885 ===================================================== 01:19:19.885 NVMe Controller at 0000:00:13.0 [1b36:0010] 01:19:19.885 ===================================================== 01:19:19.885 Controller Capabilities/Features 01:19:19.885 ================================ 01:19:19.885 Vendor ID: 1b36 01:19:19.885 Subsystem Vendor ID: 1af4 01:19:19.885 Serial Number: 12343 01:19:19.885 Model Number: QEMU NVMe Ctrl 01:19:19.885 Firmware Version: 8.0.0 01:19:19.885 Recommended Arb Burst: 6 01:19:19.885 IEEE OUI Identifier: 00 54 52 01:19:19.885 Multi-path I/O 01:19:19.885 May have multiple subsystem ports: No 01:19:19.885 May have multiple controllers: Yes 01:19:19.885 Associated with SR-IOV VF: No 01:19:19.885 Max Data Transfer Size: 524288 01:19:19.885 Max Number of Namespaces: 
256 01:19:19.885 Max Number of I/O Queues: 64 01:19:19.885 NVMe Specification Version (VS): 1.4 01:19:19.885 NVMe Specification Version (Identify): 1.4 01:19:19.885 Maximum Queue Entries: 2048 01:19:19.885 Contiguous Queues Required: Yes 01:19:19.885 Arbitration Mechanisms Supported 01:19:19.885 Weighted Round Robin: Not Supported 01:19:19.885 Vendor Specific: Not Supported 01:19:19.885 Reset Timeout: 7500 ms 01:19:19.885 Doorbell Stride: 4 bytes 01:19:19.885 NVM Subsystem Reset: Not Supported 01:19:19.885 Command Sets Supported 01:19:19.885 NVM Command Set: Supported 01:19:19.885 Boot Partition: Not Supported 01:19:19.885 Memory Page Size Minimum: 4096 bytes 01:19:19.885 Memory Page Size Maximum: 65536 bytes 01:19:19.885 Persistent Memory Region: Not Supported 01:19:19.885 Optional Asynchronous Events Supported 01:19:19.885 Namespace Attribute Notices: Supported 01:19:19.885 Firmware Activation Notices: Not Supported 01:19:19.885 ANA Change Notices: Not Supported 01:19:19.885 PLE Aggregate Log Change Notices: Not Supported 01:19:19.885 LBA Status Info Alert Notices: Not Supported 01:19:19.885 EGE Aggregate Log Change Notices: Not Supported 01:19:19.885 Normal NVM Subsystem Shutdown event: Not Supported 01:19:19.885 Zone Descriptor Change Notices: Not Supported 01:19:19.885 Discovery Log Change Notices: Not Supported 01:19:19.885 Controller Attributes 01:19:19.885 128-bit Host Identifier: Not Supported 01:19:19.885 Non-Operational Permissive Mode: Not Supported 01:19:19.885 NVM Sets: Not Supported 01:19:19.885 Read Recovery Levels: Not Supported 01:19:19.885 Endurance Groups: Supported 01:19:19.885 Predictable Latency Mode: Not Supported 01:19:19.885 Traffic Based Keep Alive: Not Supported 01:19:19.885 Namespace Granularity: Not Supported 01:19:19.885 SQ Associations: Not Supported 01:19:19.885 UUID List: Not Supported 01:19:19.885 Multi-Domain Subsystem: Not Supported 01:19:19.885 Fixed Capacity Management: Not Supported 01:19:19.885 Variable Capacity Management: Not Supported 01:19:19.885 Delete Endurance Group: Not Supported 01:19:19.885 Delete NVM Set: Not Supported 01:19:19.885 Extended LBA Formats Supported: Supported 01:19:19.885 Flexible Data Placement Supported: Supported 01:19:19.885 01:19:19.885 Controller Memory Buffer Support 01:19:19.885 ================================ 01:19:19.885 Supported: No 01:19:19.885 01:19:19.885 Persistent Memory Region Support 01:19:19.885 ================================ 01:19:19.885 Supported: No 01:19:19.885 01:19:19.885 Admin Command Set Attributes 01:19:19.885 ============================ 01:19:19.885 Security Send/Receive: Not Supported 01:19:19.885 Format NVM: Supported 01:19:19.885 Firmware Activate/Download: Not Supported 01:19:19.885 Namespace Management: Supported 01:19:19.885 Device Self-Test: Not Supported 01:19:19.885 Directives: Supported 01:19:19.885 NVMe-MI: Not Supported 01:19:19.885 Virtualization Management: Not Supported 01:19:19.885 Doorbell Buffer Config: Supported 01:19:19.885 Get LBA Status Capability: Not Supported 01:19:19.885 Command & Feature Lockdown Capability: Not Supported 01:19:19.885 Abort Command Limit: 4 01:19:19.885 Async Event Request Limit: 4 01:19:19.885 Number of Firmware Slots: N/A 01:19:19.885 Firmware Slot 1 Read-Only: N/A 01:19:19.886 Firmware Activation Without Reset: N/A 01:19:19.886 Multiple Update Detection Support: N/A 01:19:19.886 Firmware Update Granularity: No Information Provided 01:19:19.886 Per-Namespace SMART Log: Yes 01:19:19.886 Asymmetric Namespace Access Log Page: Not Supported
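The parenthesized capacities in the namespace dumps above and below are consistent with the reported LBA counts multiplied by the data size of the current LBA format (#04, 4096 bytes). A minimal shell check, using only values taken from these dumps:

# Capacity in bytes = Size (in LBAs) * data size of the current LBA format (4096 B).
echo "$(( 1048576 * 4096 / 1024**3 )) GiB"   # private namespaces above -> 4 GiB
echo "$((  262144 * 4096 / 1024**3 )) GiB"   # FDP namespace below      -> 1 GiB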
01:19:19.886 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 01:19:19.886 Command Effects Log Page: Supported 01:19:19.886 Get Log Page Extended Data: Supported 01:19:19.886 Telemetry Log Pages: Not Supported 01:19:19.886 Persistent Event Log Pages: Not Supported 01:19:19.886 Supported Log Pages Log Page: May Support 01:19:19.886 Commands Supported & Effects Log Page: Not Supported 01:19:19.886 Feature Identifiers & Effects Log Page: May Support 01:19:19.886 NVMe-MI Commands & Effects Log Page: May Support 01:19:19.886 Data Area 4 for Telemetry Log: Not Supported 01:19:19.886 Error Log Page Entries Supported: 1 01:19:19.886 Keep Alive: Not Supported 01:19:19.886 01:19:19.886 NVM Command Set Attributes 01:19:19.886 ========================== 01:19:19.886 Submission Queue Entry Size 01:19:19.886 Max: 64 01:19:19.886 Min: 64 01:19:19.886 Completion Queue Entry Size 01:19:19.886 Max: 16 01:19:19.886 Min: 16 01:19:19.886 Number of Namespaces: 256 01:19:19.886 Compare Command: Supported 01:19:19.886 Write Uncorrectable Command: Not Supported 01:19:19.886 Dataset Management Command: Supported 01:19:19.886 Write Zeroes Command: Supported 01:19:19.886 Set Features Save Field: Supported 01:19:19.886 Reservations: Not Supported 01:19:19.886 Timestamp: Supported 01:19:19.886 Copy: Supported 01:19:19.886 Volatile Write Cache: Present 01:19:19.886 Atomic Write Unit (Normal): 1 01:19:19.886 Atomic Write Unit (PFail): 1 01:19:19.886 Atomic Compare & Write Unit: 1 01:19:19.886 Fused Compare & Write: Not Supported 01:19:19.886 Scatter-Gather List 01:19:19.886 SGL Command Set: Supported 01:19:19.886 SGL Keyed: Not Supported 01:19:19.886 SGL Bit Bucket Descriptor: Not Supported 01:19:19.886 SGL Metadata Pointer: Not Supported 01:19:19.886 Oversized SGL: Not Supported 01:19:19.886 SGL Metadata Address: Not Supported 01:19:19.886 SGL Offset: Not Supported 01:19:19.886 Transport SGL Data Block: Not Supported 01:19:19.886 Replay Protected Memory Block: Not Supported 01:19:19.886 01:19:19.886 Firmware Slot Information 01:19:19.886 ========================= 01:19:19.886 Active slot: 1 01:19:19.886 Slot 1 Firmware Revision: 1.0 01:19:19.886 01:19:19.886 01:19:19.886 Commands Supported and Effects 01:19:19.886 ============================== 01:19:19.886 Admin Commands 01:19:19.886 -------------- 01:19:19.886 Delete I/O Submission Queue (00h): Supported 01:19:19.886 Create I/O Submission Queue (01h): Supported 01:19:19.886 Get Log Page (02h): Supported 01:19:19.886 Delete I/O Completion Queue (04h): Supported 01:19:19.886 Create I/O Completion Queue (05h): Supported 01:19:19.886 Identify (06h): Supported 01:19:19.886 Abort (08h): Supported 01:19:19.886 Set Features (09h): Supported 01:19:19.886 Get Features (0Ah): Supported 01:19:19.886 Asynchronous Event Request (0Ch): Supported 01:19:19.886 Namespace Attachment (15h): Supported NS-Inventory-Change 01:19:19.886 Directive Send (19h): Supported 01:19:19.886 Directive Receive (1Ah): Supported 01:19:19.886 Virtualization Management (1Ch): Supported 01:19:19.886 Doorbell Buffer Config (7Ch): Supported 01:19:19.886 Format NVM (80h): Supported LBA-Change 01:19:19.886 I/O Commands 01:19:19.886 ------------ 01:19:19.886 Flush (00h): Supported LBA-Change 01:19:19.886 Write (01h): Supported LBA-Change 01:19:19.886 Read (02h): Supported 01:19:19.886 Compare (05h): Supported 01:19:19.886 Write Zeroes (08h): Supported LBA-Change 01:19:19.886 Dataset Management (09h): Supported LBA-Change 01:19:19.886 Unknown (0Ch): Supported 01:19:19.886 Unknown (12h): Supported 01:19:19.886 Copy
(19h): Supported LBA-Change 01:19:19.886 Unknown (1Dh): Supported LBA-Change 01:19:19.886 01:19:19.886 Error Log 01:19:19.886 ========= 01:19:19.886 01:19:19.886 Arbitration 01:19:19.886 =========== 01:19:19.886 Arbitration Burst: no limit 01:19:19.886 01:19:19.886 Power Management 01:19:19.886 ================ 01:19:19.886 Number of Power States: 1 01:19:19.886 Current Power State: Power State #0 01:19:19.886 Power State #0: 01:19:19.886 Max Power: 25.00 W 01:19:19.886 Non-Operational State: Operational 01:19:19.886 Entry Latency: 16 microseconds 01:19:19.886 Exit Latency: 4 microseconds 01:19:19.886 Relative Read Throughput: 0 01:19:19.886 Relative Read Latency: 0 01:19:19.886 Relative Write Throughput: 0 01:19:19.886 Relative Write Latency: 0 01:19:19.886 Idle Power: Not Reported 01:19:19.886 Active Power: Not Reported 01:19:19.886 Non-Operational Permissive Mode: Not Supported 01:19:19.886 01:19:19.886 Health Information 01:19:19.886 ================== 01:19:19.886 Critical Warnings: 01:19:19.886 Available Spare Space: OK 01:19:19.886 Temperature: OK 01:19:19.886 Device Reliability: OK 01:19:19.886 Read Only: No 01:19:19.886 Volatile Memory Backup: OK 01:19:19.886 Current Temperature: 323 Kelvin (50 Celsius) 01:19:19.886 Temperature Threshold: 343 Kelvin (70 Celsius) 01:19:19.886 Available Spare: 0% 01:19:19.886 Available Spare Threshold: 0% 01:19:19.886 Life Percentage Used: 0% 01:19:19.886 Data Units Read: 871 01:19:19.886 Data Units Written: 800 01:19:19.886 Host Read Commands: 39341 01:19:19.886 Host Write Commands: 38764 01:19:19.886 Controller Busy Time: 0 minutes 01:19:19.886 Power Cycles: 0 01:19:19.886 Power On Hours: 0 hours 01:19:19.886 Unsafe Shutdowns: 0 01:19:19.886 Unrecoverable Media Errors: 0 01:19:19.886 Lifetime Error Log Entries: 0 01:19:19.886 Warning Temperature Time: 0 minutes 01:19:19.886 Critical Temperature Time: 0 minutes 01:19:19.886 01:19:19.886 Number of Queues 01:19:19.886 ================ 01:19:19.886 Number of I/O Submission Queues: 64 01:19:19.886 Number of I/O Completion Queues: 64 01:19:19.886 01:19:19.886 ZNS Specific Controller Data 01:19:19.886 ============================ 01:19:19.886 Zone Append Size Limit: 0 01:19:19.886 01:19:19.886 01:19:19.886 Active Namespaces 01:19:19.886 ================= 01:19:19.886 Namespace ID:1 01:19:19.886 Error Recovery Timeout: Unlimited 01:19:19.886 Command Set Identifier: NVM (00h) 01:19:19.886 Deallocate: Supported 01:19:19.886 Deallocated/Unwritten Error: Supported 01:19:19.886 Deallocated Read Value: All 0x00 01:19:19.886 Deallocate in Write Zeroes: Not Supported 01:19:19.886 Deallocated Guard Field: 0xFFFF 01:19:19.886 Flush: Supported 01:19:19.886 Reservation: Not Supported 01:19:19.886 Namespace Sharing Capabilities: Multiple Controllers 01:19:19.886 Size (in LBAs): 262144 (1GiB) 01:19:19.886 Capacity (in LBAs): 262144 (1GiB) 01:19:19.886 Utilization (in LBAs): 262144 (1GiB) 01:19:19.886 Thin Provisioning: Not Supported 01:19:19.886 Per-NS Atomic Units: No 01:19:19.886 Maximum Single Source Range Length: 128 01:19:19.886 Maximum Copy Length: 128 01:19:19.886 Maximum Source Range Count: 128 01:19:19.886 NGUID/EUI64 Never Reused: No 01:19:19.886 Namespace Write Protected: No 01:19:19.886 Endurance group ID: 1 01:19:19.886 Number of LBA Formats: 8 01:19:19.886 Current LBA Format: LBA Format #04 01:19:19.886 LBA Format #00: Data Size: 512 Metadata Size: 0 01:19:19.886 LBA Format #01: Data Size: 512 Metadata Size: 8 01:19:19.886 LBA Format #02: Data Size: 512 Metadata Size: 16 01:19:19.886 LBA Format #03: Data 
Size: 512 Metadata Size: 64 01:19:19.886 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:19:19.886 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:19:19.886 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:19:19.886 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:19:19.886 01:19:19.886 Get Feature FDP: 01:19:19.886 ================ 01:19:19.886 Enabled: Yes 01:19:19.886 FDP configuration index: 0 01:19:19.886 01:19:19.886 FDP configurations log page 01:19:19.886 =========================== 01:19:19.886 Number of FDP configurations: 1 01:19:19.886 Version: 0 01:19:19.886 Size: 112 01:19:19.886 FDP Configuration Descriptor: 0 01:19:19.886 Descriptor Size: 96 01:19:19.886 Reclaim Group Identifier format: 2 01:19:19.886 FDP Volatile Write Cache: Not Present 01:19:19.886 FDP Configuration: Valid 01:19:19.886 Vendor Specific Size: 0 01:19:19.886 Number of Reclaim Groups: 2 01:19:19.886 Number of Reclaim Unit Handles: 8 01:19:19.886 Max Placement Identifiers: 128 01:19:19.886 Number of Namespaces Supported: 256 01:19:19.886 Reclaim Unit Nominal Size: 6000000 bytes 01:19:19.887 Estimated Reclaim Unit Time Limit: Not Reported 01:19:19.887 RUH Desc #000: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #001: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #002: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #003: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #004: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #005: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #006: RUH Type: Initially Isolated 01:19:19.887 RUH Desc #007: RUH Type: Initially Isolated 01:19:19.887 01:19:19.887 FDP reclaim unit handle usage log page 01:19:19.887 ====================================== 01:19:19.887 Number of Reclaim Unit Handles: 8 01:19:19.887 RUH Usage Desc #000: RUH Attributes: Controller Specified 01:19:19.887 RUH Usage Desc #001: RUH Attributes: Unused 01:19:19.887 RUH Usage Desc #002: RUH Attributes: Unused 01:19:19.887 RUH Usage Desc #003: RUH Attributes: Unused 01:19:19.887 RUH Usage Desc #004: RUH Attributes: Unused 01:19:19.887 RUH Usage Desc #005: RUH Attributes: Unused 01:19:19.887 RUH Usage Desc #006: RUH Attributes: Unused 01:19:19.887 RUH Usage Desc #007: RUH Attributes: Unused 01:19:19.887 01:19:19.887 FDP statistics log page 01:19:19.887 ======================= 01:19:19.887 Host bytes with metadata written: 511811584 01:19:19.887 Media bytes with metadata written: 511868928 01:19:19.887 Media bytes erased: 0 01:19:19.887 01:19:19.887 FDP events log page 01:19:19.887 =================== 01:19:19.887 Number of FDP events: 0 01:19:19.887 01:19:19.887 NVM Specific Namespace Data 01:19:19.887 =========================== 01:19:19.887 Logical Block Storage Tag Mask: 0 01:19:19.887 Protection Information Capabilities: 01:19:19.887 16b Guard Protection Information Storage Tag Support: No 01:19:19.887 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:19:19.887 Storage Tag Check Read Support: No 01:19:19.887 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:19:19.887 01:19:19.887 real 0m2.150s 01:19:19.887 user 0m1.047s 01:19:19.887 sys 0m0.909s 01:19:19.887 05:14:02 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:19.887 05:14:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 01:19:19.887 ************************************ 01:19:19.887 END TEST nvme_identify 01:19:19.887 ************************************ 01:19:19.887 05:14:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 01:19:19.887 05:14:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:19.887 05:14:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:19.887 05:14:02 nvme -- common/autotest_common.sh@10 -- # set +x 01:19:19.887 ************************************ 01:19:19.887 START TEST nvme_perf 01:19:19.887 ************************************ 01:19:19.887 05:14:02 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 01:19:19.887 05:14:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 01:19:21.266 Initializing NVMe Controllers 01:19:21.266 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:19:21.266 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:19:21.266 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:19:21.266 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:19:21.266 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 01:19:21.266 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:19:21.266 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 01:19:21.266 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 01:19:21.266 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 01:19:21.266 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 01:19:21.266 Initialization complete. Launching workers. 
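The spdk_nvme_perf invocation above drives every attached namespace with a 128-deep queue of 12288-byte (12 KiB) reads for one second; the doubled -L flag asks for both the per-device latency summaries and the detailed latency histograms that follow. A minimal sketch of an equivalent standalone run, keeping the binary path from this log and dropping the harness-specific -i/-N flags:

# -q 128: queue depth; -w read: 100% reads; -o 12288: I/O size in bytes (12 KiB);
# -t 1: run time in seconds; -L: latency summary, doubled as -LL: detailed histogram too.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL

In the histograms, each row shows a latency bucket (range in microseconds), the cumulative percentage of I/Os completed by the end of that bucket, and, in parentheses, the count of I/Os that landed in the bucket itself; for example, on 0000:00:13.0 the first two buckets hold 6 and 15 I/Os, i.e. 0.0426% and then 0.1491% of the roughly 14,000 I/Os that device completed.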
01:19:21.266 ======================================================== 01:19:21.266 Latency(us) 01:19:21.266 Device Information : IOPS MiB/s Average min max 01:19:21.266 PCIE (0000:00:13.0) NSID 1 from core 0: 14044.62 164.59 9132.79 7821.71 47065.89 01:19:21.266 PCIE (0000:00:10.0) NSID 1 from core 0: 14044.62 164.59 9113.12 7756.39 45033.79 01:19:21.266 PCIE (0000:00:11.0) NSID 1 from core 0: 14044.62 164.59 9094.89 7821.48 42621.77 01:19:21.266 PCIE (0000:00:12.0) NSID 1 from core 0: 14044.62 164.59 9075.41 7824.31 40802.45 01:19:21.266 PCIE (0000:00:12.0) NSID 2 from core 0: 14044.62 164.59 9055.09 7828.19 38375.26 01:19:21.266 PCIE (0000:00:12.0) NSID 3 from core 0: 14044.62 164.59 9034.08 7827.33 35850.84 01:19:21.266 ======================================================== 01:19:21.266 Total : 84267.73 987.51 9084.23 7756.39 47065.89 01:19:21.266 01:19:21.266 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 01:19:21.266 ================================================================================= 01:19:21.266 1.00000% : 8053.822us 01:19:21.266 10.00000% : 8264.379us 01:19:21.266 25.00000% : 8474.937us 01:19:21.266 50.00000% : 8738.133us 01:19:21.266 75.00000% : 9053.969us 01:19:21.266 90.00000% : 9422.445us 01:19:21.266 95.00000% : 10001.478us 01:19:21.266 98.00000% : 12159.692us 01:19:21.266 99.00000% : 14212.627us 01:19:21.266 99.50000% : 37268.665us 01:19:21.266 99.90000% : 46533.192us 01:19:21.266 99.99000% : 47164.864us 01:19:21.266 99.99900% : 47164.864us 01:19:21.266 99.99990% : 47164.864us 01:19:21.266 99.99999% : 47164.864us 01:19:21.266 01:19:21.266 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 01:19:21.266 ================================================================================= 01:19:21.266 1.00000% : 7948.543us 01:19:21.266 10.00000% : 8211.740us 01:19:21.266 25.00000% : 8474.937us 01:19:21.266 50.00000% : 8790.773us 01:19:21.266 75.00000% : 9106.609us 01:19:21.266 90.00000% : 9475.084us 01:19:21.266 95.00000% : 10001.478us 01:19:21.266 98.00000% : 11843.855us 01:19:21.266 99.00000% : 14423.184us 01:19:21.266 99.50000% : 37268.665us 01:19:21.266 99.90000% : 44638.175us 01:19:21.266 99.99000% : 45059.290us 01:19:21.266 99.99900% : 45059.290us 01:19:21.266 99.99990% : 45059.290us 01:19:21.266 99.99999% : 45059.290us 01:19:21.266 01:19:21.266 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 01:19:21.266 ================================================================================= 01:19:21.266 1.00000% : 8053.822us 01:19:21.266 10.00000% : 8264.379us 01:19:21.266 25.00000% : 8474.937us 01:19:21.266 50.00000% : 8738.133us 01:19:21.266 75.00000% : 9053.969us 01:19:21.266 90.00000% : 9422.445us 01:19:21.266 95.00000% : 9948.839us 01:19:21.266 98.00000% : 11896.495us 01:19:21.266 99.00000% : 14528.463us 01:19:21.266 99.50000% : 34952.533us 01:19:21.266 99.90000% : 42111.486us 01:19:21.266 99.99000% : 42743.158us 01:19:21.266 99.99900% : 42743.158us 01:19:21.266 99.99990% : 42743.158us 01:19:21.266 99.99999% : 42743.158us 01:19:21.266 01:19:21.266 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 01:19:21.266 ================================================================================= 01:19:21.266 1.00000% : 8053.822us 01:19:21.266 10.00000% : 8264.379us 01:19:21.266 25.00000% : 8474.937us 01:19:21.266 50.00000% : 8790.773us 01:19:21.266 75.00000% : 9053.969us 01:19:21.266 90.00000% : 9422.445us 01:19:21.266 95.00000% : 9896.199us 01:19:21.266 98.00000% : 11685.937us 01:19:21.266 99.00000% : 
13686.233us 01:19:21.266 99.50000% : 33268.074us 01:19:21.266 99.90000% : 40427.027us 01:19:21.266 99.99000% : 40848.141us 01:19:21.266 99.99900% : 40848.141us 01:19:21.266 99.99990% : 40848.141us 01:19:21.266 99.99999% : 40848.141us 01:19:21.266 01:19:21.266 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 01:19:21.266 ================================================================================= 01:19:21.266 1.00000% : 8053.822us 01:19:21.266 10.00000% : 8264.379us 01:19:21.266 25.00000% : 8474.937us 01:19:21.266 50.00000% : 8738.133us 01:19:21.266 75.00000% : 9053.969us 01:19:21.266 90.00000% : 9422.445us 01:19:21.266 95.00000% : 9896.199us 01:19:21.266 98.00000% : 12054.413us 01:19:21.266 99.00000% : 13896.790us 01:19:21.266 99.50000% : 30951.942us 01:19:21.266 99.90000% : 38110.895us 01:19:21.266 99.99000% : 38532.010us 01:19:21.266 99.99900% : 38532.010us 01:19:21.266 99.99990% : 38532.010us 01:19:21.266 99.99999% : 38532.010us 01:19:21.266 01:19:21.266 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 01:19:21.266 ================================================================================= 01:19:21.266 1.00000% : 8053.822us 01:19:21.266 10.00000% : 8264.379us 01:19:21.266 25.00000% : 8474.937us 01:19:21.266 50.00000% : 8738.133us 01:19:21.266 75.00000% : 9053.969us 01:19:21.266 90.00000% : 9422.445us 01:19:21.266 95.00000% : 10001.478us 01:19:21.266 98.00000% : 11843.855us 01:19:21.266 99.00000% : 14002.069us 01:19:21.266 99.50000% : 28635.810us 01:19:21.266 99.90000% : 35373.648us 01:19:21.266 99.99000% : 36005.320us 01:19:21.266 99.99900% : 36005.320us 01:19:21.266 99.99990% : 36005.320us 01:19:21.266 99.99999% : 36005.320us 01:19:21.266 01:19:21.266 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 01:19:21.266 ============================================================================== 01:19:21.266 Range in us Cumulative IO count 01:19:21.266 7790.625 - 7843.264: 0.0426% ( 6) 01:19:21.266 7843.264 - 7895.904: 0.1491% ( 15) 01:19:21.266 7895.904 - 7948.543: 0.4474% ( 42) 01:19:21.266 7948.543 - 8001.182: 0.8381% ( 55) 01:19:21.266 8001.182 - 8053.822: 1.7756% ( 132) 01:19:21.266 8053.822 - 8106.461: 3.4872% ( 241) 01:19:21.266 8106.461 - 8159.100: 5.6676% ( 307) 01:19:21.266 8159.100 - 8211.740: 8.2670% ( 366) 01:19:21.266 8211.740 - 8264.379: 11.2216% ( 416) 01:19:21.266 8264.379 - 8317.018: 14.5952% ( 475) 01:19:21.266 8317.018 - 8369.658: 18.1605% ( 502) 01:19:21.266 8369.658 - 8422.297: 21.9957% ( 540) 01:19:21.266 8422.297 - 8474.937: 26.1151% ( 580) 01:19:21.266 8474.937 - 8527.576: 30.5966% ( 631) 01:19:21.266 8527.576 - 8580.215: 35.2415% ( 654) 01:19:21.266 8580.215 - 8632.855: 40.1349% ( 689) 01:19:21.266 8632.855 - 8685.494: 45.1562% ( 707) 01:19:21.266 8685.494 - 8738.133: 50.1847% ( 708) 01:19:21.266 8738.133 - 8790.773: 55.3125% ( 722) 01:19:21.266 8790.773 - 8843.412: 60.3764% ( 713) 01:19:21.266 8843.412 - 8896.051: 65.2770% ( 690) 01:19:21.266 8896.051 - 8948.691: 69.6875% ( 621) 01:19:21.266 8948.691 - 9001.330: 73.5866% ( 549) 01:19:21.266 9001.330 - 9053.969: 76.7543% ( 446) 01:19:21.266 9053.969 - 9106.609: 79.5384% ( 392) 01:19:21.266 9106.609 - 9159.248: 81.9602% ( 341) 01:19:21.266 9159.248 - 9211.888: 84.2401% ( 321) 01:19:21.266 9211.888 - 9264.527: 86.2003% ( 276) 01:19:21.266 9264.527 - 9317.166: 87.9759% ( 250) 01:19:21.266 9317.166 - 9369.806: 89.4744% ( 211) 01:19:21.266 9369.806 - 9422.445: 90.5753% ( 155) 01:19:21.266 9422.445 - 9475.084: 91.4986% ( 130) 01:19:21.266 9475.084 - 9527.724: 
92.2585% ( 107) 01:19:21.266 9527.724 - 9580.363: 92.8409% ( 82) 01:19:21.266 9580.363 - 9633.002: 93.3097% ( 66) 01:19:21.266 9633.002 - 9685.642: 93.6861% ( 53) 01:19:21.266 9685.642 - 9738.281: 94.0057% ( 45) 01:19:21.266 9738.281 - 9790.920: 94.3182% ( 44) 01:19:21.266 9790.920 - 9843.560: 94.5881% ( 38) 01:19:21.266 9843.560 - 9896.199: 94.7940% ( 29) 01:19:21.266 9896.199 - 9948.839: 94.9645% ( 24) 01:19:21.266 9948.839 - 10001.478: 95.1562% ( 27) 01:19:21.266 10001.478 - 10054.117: 95.2983% ( 20) 01:19:21.267 10054.117 - 10106.757: 95.4261% ( 18) 01:19:21.267 10106.757 - 10159.396: 95.5682% ( 20) 01:19:21.267 10159.396 - 10212.035: 95.6960% ( 18) 01:19:21.267 10212.035 - 10264.675: 95.8523% ( 22) 01:19:21.267 10264.675 - 10317.314: 96.0085% ( 22) 01:19:21.267 10317.314 - 10369.953: 96.1577% ( 21) 01:19:21.267 10369.953 - 10422.593: 96.2855% ( 18) 01:19:21.267 10422.593 - 10475.232: 96.3849% ( 14) 01:19:21.267 10475.232 - 10527.871: 96.4702% ( 12) 01:19:21.267 10527.871 - 10580.511: 96.5554% ( 12) 01:19:21.267 10580.511 - 10633.150: 96.6477% ( 13) 01:19:21.267 10633.150 - 10685.790: 96.7045% ( 8) 01:19:21.267 10685.790 - 10738.429: 96.7898% ( 12) 01:19:21.267 10738.429 - 10791.068: 96.8750% ( 12) 01:19:21.267 10791.068 - 10843.708: 96.9531% ( 11) 01:19:21.267 10843.708 - 10896.347: 97.0384% ( 12) 01:19:21.267 10896.347 - 10948.986: 97.0881% ( 7) 01:19:21.267 10948.986 - 11001.626: 97.1307% ( 6) 01:19:21.267 11001.626 - 11054.265: 97.1875% ( 8) 01:19:21.267 11054.265 - 11106.904: 97.2443% ( 8) 01:19:21.267 11106.904 - 11159.544: 97.2940% ( 7) 01:19:21.267 11159.544 - 11212.183: 97.3509% ( 8) 01:19:21.267 11212.183 - 11264.822: 97.4148% ( 9) 01:19:21.267 11264.822 - 11317.462: 97.4503% ( 5) 01:19:21.267 11317.462 - 11370.101: 97.4929% ( 6) 01:19:21.267 11370.101 - 11422.741: 97.5355% ( 6) 01:19:21.267 11422.741 - 11475.380: 97.5781% ( 6) 01:19:21.267 11475.380 - 11528.019: 97.6136% ( 5) 01:19:21.267 11528.019 - 11580.659: 97.6562% ( 6) 01:19:21.267 11580.659 - 11633.298: 97.7131% ( 8) 01:19:21.267 11633.298 - 11685.937: 97.7557% ( 6) 01:19:21.267 11685.937 - 11738.577: 97.8054% ( 7) 01:19:21.267 11738.577 - 11791.216: 97.8338% ( 4) 01:19:21.267 11791.216 - 11843.855: 97.8622% ( 4) 01:19:21.267 11843.855 - 11896.495: 97.8835% ( 3) 01:19:21.267 11896.495 - 11949.134: 97.9119% ( 4) 01:19:21.267 11949.134 - 12001.773: 97.9332% ( 3) 01:19:21.267 12001.773 - 12054.413: 97.9616% ( 4) 01:19:21.267 12054.413 - 12107.052: 97.9901% ( 4) 01:19:21.267 12107.052 - 12159.692: 98.0185% ( 4) 01:19:21.267 12159.692 - 12212.331: 98.0398% ( 3) 01:19:21.267 12212.331 - 12264.970: 98.0682% ( 4) 01:19:21.267 12264.970 - 12317.610: 98.0966% ( 4) 01:19:21.267 12317.610 - 12370.249: 98.1250% ( 4) 01:19:21.267 12370.249 - 12422.888: 98.1463% ( 3) 01:19:21.267 12422.888 - 12475.528: 98.1747% ( 4) 01:19:21.267 12475.528 - 12528.167: 98.1818% ( 1) 01:19:21.267 12844.003 - 12896.643: 98.1960% ( 2) 01:19:21.267 12896.643 - 12949.282: 98.2102% ( 2) 01:19:21.267 12949.282 - 13001.921: 98.2315% ( 3) 01:19:21.267 13001.921 - 13054.561: 98.2528% ( 3) 01:19:21.267 13054.561 - 13107.200: 98.2670% ( 2) 01:19:21.267 13107.200 - 13159.839: 98.2884% ( 3) 01:19:21.267 13159.839 - 13212.479: 98.3097% ( 3) 01:19:21.267 13212.479 - 13265.118: 98.3310% ( 3) 01:19:21.267 13265.118 - 13317.757: 98.3807% ( 7) 01:19:21.267 13317.757 - 13370.397: 98.4162% ( 5) 01:19:21.267 13370.397 - 13423.036: 98.4446% ( 4) 01:19:21.267 13423.036 - 13475.676: 98.4872% ( 6) 01:19:21.267 13475.676 - 13580.954: 98.5724% ( 12) 01:19:21.267 13580.954 - 
13686.233: 98.6506% ( 11) 01:19:21.267 13686.233 - 13791.512: 98.7429% ( 13) 01:19:21.267 13791.512 - 13896.790: 98.8281% ( 12) 01:19:21.267 13896.790 - 14002.069: 98.9062% ( 11) 01:19:21.267 14002.069 - 14107.348: 98.9773% ( 10) 01:19:21.267 14107.348 - 14212.627: 99.0554% ( 11) 01:19:21.267 14212.627 - 14317.905: 99.0909% ( 5) 01:19:21.267 35163.091 - 35373.648: 99.1122% ( 3) 01:19:21.267 35373.648 - 35584.206: 99.1619% ( 7) 01:19:21.267 35584.206 - 35794.763: 99.2045% ( 6) 01:19:21.267 35794.763 - 36005.320: 99.2472% ( 6) 01:19:21.267 36005.320 - 36215.878: 99.2898% ( 6) 01:19:21.267 36215.878 - 36426.435: 99.3324% ( 6) 01:19:21.267 36426.435 - 36636.993: 99.3750% ( 6) 01:19:21.267 36636.993 - 36847.550: 99.4247% ( 7) 01:19:21.267 36847.550 - 37058.108: 99.4673% ( 6) 01:19:21.267 37058.108 - 37268.665: 99.5170% ( 7) 01:19:21.267 37268.665 - 37479.222: 99.5455% ( 4) 01:19:21.267 44638.175 - 44848.733: 99.5597% ( 2) 01:19:21.267 44848.733 - 45059.290: 99.6023% ( 6) 01:19:21.267 45059.290 - 45269.847: 99.6449% ( 6) 01:19:21.267 45269.847 - 45480.405: 99.6804% ( 5) 01:19:21.267 45480.405 - 45690.962: 99.7230% ( 6) 01:19:21.267 45690.962 - 45901.520: 99.7656% ( 6) 01:19:21.267 45901.520 - 46112.077: 99.8153% ( 7) 01:19:21.267 46112.077 - 46322.635: 99.8509% ( 5) 01:19:21.267 46322.635 - 46533.192: 99.9006% ( 7) 01:19:21.267 46533.192 - 46743.749: 99.9361% ( 5) 01:19:21.267 46743.749 - 46954.307: 99.9787% ( 6) 01:19:21.267 46954.307 - 47164.864: 100.0000% ( 3) 01:19:21.267 01:19:21.267 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 01:19:21.267 ============================================================================== 01:19:21.267 Range in us Cumulative IO count 01:19:21.267 7737.986 - 7790.625: 0.0355% ( 5) 01:19:21.267 7790.625 - 7843.264: 0.1918% ( 22) 01:19:21.267 7843.264 - 7895.904: 0.5114% ( 45) 01:19:21.267 7895.904 - 7948.543: 1.2003% ( 97) 01:19:21.267 7948.543 - 8001.182: 2.3651% ( 164) 01:19:21.267 8001.182 - 8053.822: 4.1122% ( 246) 01:19:21.267 8053.822 - 8106.461: 6.2429% ( 300) 01:19:21.267 8106.461 - 8159.100: 8.7003% ( 346) 01:19:21.267 8159.100 - 8211.740: 11.4560% ( 388) 01:19:21.267 8211.740 - 8264.379: 14.3324% ( 405) 01:19:21.267 8264.379 - 8317.018: 17.4219% ( 435) 01:19:21.267 8317.018 - 8369.658: 20.7599% ( 470) 01:19:21.267 8369.658 - 8422.297: 24.4176% ( 515) 01:19:21.267 8422.297 - 8474.937: 28.3381% ( 552) 01:19:21.267 8474.937 - 8527.576: 32.4929% ( 585) 01:19:21.267 8527.576 - 8580.215: 36.5270% ( 568) 01:19:21.267 8580.215 - 8632.855: 40.8452% ( 608) 01:19:21.267 8632.855 - 8685.494: 45.2131% ( 615) 01:19:21.267 8685.494 - 8738.133: 49.6023% ( 618) 01:19:21.267 8738.133 - 8790.773: 54.0909% ( 632) 01:19:21.267 8790.773 - 8843.412: 58.5014% ( 621) 01:19:21.267 8843.412 - 8896.051: 62.9474% ( 626) 01:19:21.267 8896.051 - 8948.691: 67.2798% ( 610) 01:19:21.267 8948.691 - 9001.330: 71.3991% ( 580) 01:19:21.267 9001.330 - 9053.969: 74.7940% ( 478) 01:19:21.267 9053.969 - 9106.609: 77.8267% ( 427) 01:19:21.267 9106.609 - 9159.248: 80.3622% ( 357) 01:19:21.267 9159.248 - 9211.888: 82.6847% ( 327) 01:19:21.267 9211.888 - 9264.527: 84.5455% ( 262) 01:19:21.267 9264.527 - 9317.166: 86.3281% ( 251) 01:19:21.267 9317.166 - 9369.806: 88.0256% ( 239) 01:19:21.267 9369.806 - 9422.445: 89.5241% ( 211) 01:19:21.267 9422.445 - 9475.084: 90.5966% ( 151) 01:19:21.267 9475.084 - 9527.724: 91.5625% ( 136) 01:19:21.267 9527.724 - 9580.363: 92.2656% ( 99) 01:19:21.267 9580.363 - 9633.002: 92.8267% ( 79) 01:19:21.267 9633.002 - 9685.642: 93.2670% ( 62) 01:19:21.267 
9685.642 - 9738.281: 93.6293% ( 51) 01:19:21.267 9738.281 - 9790.920: 94.0625% ( 61) 01:19:21.267 9790.920 - 9843.560: 94.3679% ( 43) 01:19:21.267 9843.560 - 9896.199: 94.6591% ( 41) 01:19:21.267 9896.199 - 9948.839: 94.9006% ( 34) 01:19:21.267 9948.839 - 10001.478: 95.1278% ( 32) 01:19:21.267 10001.478 - 10054.117: 95.2912% ( 23) 01:19:21.267 10054.117 - 10106.757: 95.4403% ( 21) 01:19:21.267 10106.757 - 10159.396: 95.5753% ( 19) 01:19:21.267 10159.396 - 10212.035: 95.6818% ( 15) 01:19:21.267 10212.035 - 10264.675: 95.8239% ( 20) 01:19:21.267 10264.675 - 10317.314: 95.9446% ( 17) 01:19:21.267 10317.314 - 10369.953: 96.0795% ( 19) 01:19:21.267 10369.953 - 10422.593: 96.2145% ( 19) 01:19:21.267 10422.593 - 10475.232: 96.3423% ( 18) 01:19:21.267 10475.232 - 10527.871: 96.4702% ( 18) 01:19:21.267 10527.871 - 10580.511: 96.5767% ( 15) 01:19:21.267 10580.511 - 10633.150: 96.6832% ( 15) 01:19:21.267 10633.150 - 10685.790: 96.7969% ( 16) 01:19:21.267 10685.790 - 10738.429: 96.8892% ( 13) 01:19:21.267 10738.429 - 10791.068: 96.9957% ( 15) 01:19:21.267 10791.068 - 10843.708: 97.0810% ( 12) 01:19:21.267 10843.708 - 10896.347: 97.1520% ( 10) 01:19:21.267 10896.347 - 10948.986: 97.2301% ( 11) 01:19:21.267 10948.986 - 11001.626: 97.3082% ( 11) 01:19:21.267 11001.626 - 11054.265: 97.4006% ( 13) 01:19:21.267 11054.265 - 11106.904: 97.4645% ( 9) 01:19:21.267 11106.904 - 11159.544: 97.5213% ( 8) 01:19:21.267 11159.544 - 11212.183: 97.5852% ( 9) 01:19:21.267 11212.183 - 11264.822: 97.6349% ( 7) 01:19:21.267 11264.822 - 11317.462: 97.6918% ( 8) 01:19:21.267 11317.462 - 11370.101: 97.7486% ( 8) 01:19:21.267 11370.101 - 11422.741: 97.7841% ( 5) 01:19:21.267 11422.741 - 11475.380: 97.8267% ( 6) 01:19:21.267 11475.380 - 11528.019: 97.8480% ( 3) 01:19:21.267 11528.019 - 11580.659: 97.8764% ( 4) 01:19:21.267 11580.659 - 11633.298: 97.9048% ( 4) 01:19:21.267 11633.298 - 11685.937: 97.9403% ( 5) 01:19:21.267 11685.937 - 11738.577: 97.9688% ( 4) 01:19:21.267 11738.577 - 11791.216: 97.9901% ( 3) 01:19:21.267 11791.216 - 11843.855: 98.0114% ( 3) 01:19:21.267 11843.855 - 11896.495: 98.0469% ( 5) 01:19:21.267 11896.495 - 11949.134: 98.1108% ( 9) 01:19:21.267 11949.134 - 12001.773: 98.1605% ( 7) 01:19:21.267 12001.773 - 12054.413: 98.2031% ( 6) 01:19:21.267 12054.413 - 12107.052: 98.2386% ( 5) 01:19:21.267 12107.052 - 12159.692: 98.2670% ( 4) 01:19:21.267 12159.692 - 12212.331: 98.3026% ( 5) 01:19:21.267 12212.331 - 12264.970: 98.3452% ( 6) 01:19:21.267 12264.970 - 12317.610: 98.3736% ( 4) 01:19:21.267 12317.610 - 12370.249: 98.3949% ( 3) 01:19:21.267 12370.249 - 12422.888: 98.4233% ( 4) 01:19:21.267 12422.888 - 12475.528: 98.4375% ( 2) 01:19:21.267 12475.528 - 12528.167: 98.4588% ( 3) 01:19:21.267 12528.167 - 12580.806: 98.4872% ( 4) 01:19:21.267 12580.806 - 12633.446: 98.5085% ( 3) 01:19:21.267 12633.446 - 12686.085: 98.5298% ( 3) 01:19:21.268 12686.085 - 12738.724: 98.5511% ( 3) 01:19:21.268 12738.724 - 12791.364: 98.5724% ( 3) 01:19:21.268 12791.364 - 12844.003: 98.6009% ( 4) 01:19:21.268 12844.003 - 12896.643: 98.6080% ( 1) 01:19:21.268 12896.643 - 12949.282: 98.6364% ( 4) 01:19:21.268 13370.397 - 13423.036: 98.6506% ( 2) 01:19:21.268 13423.036 - 13475.676: 98.6648% ( 2) 01:19:21.268 13475.676 - 13580.954: 98.7074% ( 6) 01:19:21.268 13580.954 - 13686.233: 98.7429% ( 5) 01:19:21.268 13686.233 - 13791.512: 98.7784% ( 5) 01:19:21.268 13791.512 - 13896.790: 98.8210% ( 6) 01:19:21.268 13896.790 - 14002.069: 98.8636% ( 6) 01:19:21.268 14002.069 - 14107.348: 98.8991% ( 5) 01:19:21.268 14107.348 - 14212.627: 98.9418% ( 6) 
01:19:21.268 14212.627 - 14317.905: 98.9773% ( 5) 01:19:21.268 14317.905 - 14423.184: 99.0199% ( 6) 01:19:21.268 14423.184 - 14528.463: 99.0696% ( 7) 01:19:21.268 14528.463 - 14633.741: 99.0909% ( 3) 01:19:21.268 34741.976 - 34952.533: 99.1122% ( 3) 01:19:21.268 34952.533 - 35163.091: 99.1548% ( 6) 01:19:21.268 35163.091 - 35373.648: 99.1903% ( 5) 01:19:21.268 35373.648 - 35584.206: 99.2188% ( 4) 01:19:21.268 35584.206 - 35794.763: 99.2614% ( 6) 01:19:21.268 35794.763 - 36005.320: 99.3040% ( 6) 01:19:21.268 36005.320 - 36215.878: 99.3395% ( 5) 01:19:21.268 36215.878 - 36426.435: 99.3750% ( 5) 01:19:21.268 36426.435 - 36636.993: 99.4105% ( 5) 01:19:21.268 36636.993 - 36847.550: 99.4460% ( 5) 01:19:21.268 36847.550 - 37058.108: 99.4815% ( 5) 01:19:21.268 37058.108 - 37268.665: 99.5170% ( 5) 01:19:21.268 37268.665 - 37479.222: 99.5455% ( 4) 01:19:21.268 42322.043 - 42532.601: 99.5597% ( 2) 01:19:21.268 42532.601 - 42743.158: 99.6023% ( 6) 01:19:21.268 42743.158 - 42953.716: 99.6378% ( 5) 01:19:21.268 42953.716 - 43164.273: 99.6662% ( 4) 01:19:21.268 43164.273 - 43374.831: 99.7017% ( 5) 01:19:21.268 43374.831 - 43585.388: 99.7443% ( 6) 01:19:21.268 43585.388 - 43795.945: 99.7727% ( 4) 01:19:21.268 43795.945 - 44006.503: 99.8153% ( 6) 01:19:21.268 44006.503 - 44217.060: 99.8509% ( 5) 01:19:21.268 44217.060 - 44427.618: 99.8864% ( 5) 01:19:21.268 44427.618 - 44638.175: 99.9290% ( 6) 01:19:21.268 44638.175 - 44848.733: 99.9716% ( 6) 01:19:21.268 44848.733 - 45059.290: 100.0000% ( 4) 01:19:21.268 01:19:21.268 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 01:19:21.268 ============================================================================== 01:19:21.268 Range in us Cumulative IO count 01:19:21.268 7790.625 - 7843.264: 0.0284% ( 4) 01:19:21.268 7843.264 - 7895.904: 0.1065% ( 11) 01:19:21.268 7895.904 - 7948.543: 0.3764% ( 38) 01:19:21.268 7948.543 - 8001.182: 0.9020% ( 74) 01:19:21.268 8001.182 - 8053.822: 1.9531% ( 148) 01:19:21.268 8053.822 - 8106.461: 3.5014% ( 218) 01:19:21.268 8106.461 - 8159.100: 5.7457% ( 316) 01:19:21.268 8159.100 - 8211.740: 8.4020% ( 374) 01:19:21.268 8211.740 - 8264.379: 11.4418% ( 428) 01:19:21.268 8264.379 - 8317.018: 14.7159% ( 461) 01:19:21.268 8317.018 - 8369.658: 18.4162% ( 521) 01:19:21.268 8369.658 - 8422.297: 22.3722% ( 557) 01:19:21.268 8422.297 - 8474.937: 26.6335% ( 600) 01:19:21.268 8474.937 - 8527.576: 30.9375% ( 606) 01:19:21.268 8527.576 - 8580.215: 35.6463% ( 663) 01:19:21.268 8580.215 - 8632.855: 40.4830% ( 681) 01:19:21.268 8632.855 - 8685.494: 45.4830% ( 704) 01:19:21.268 8685.494 - 8738.133: 50.4403% ( 698) 01:19:21.268 8738.133 - 8790.773: 55.3977% ( 698) 01:19:21.268 8790.773 - 8843.412: 60.3906% ( 703) 01:19:21.268 8843.412 - 8896.051: 65.1491% ( 670) 01:19:21.268 8896.051 - 8948.691: 69.3608% ( 593) 01:19:21.268 8948.691 - 9001.330: 73.1179% ( 529) 01:19:21.268 9001.330 - 9053.969: 76.4560% ( 470) 01:19:21.268 9053.969 - 9106.609: 79.2969% ( 400) 01:19:21.268 9106.609 - 9159.248: 81.7472% ( 345) 01:19:21.268 9159.248 - 9211.888: 83.9844% ( 315) 01:19:21.268 9211.888 - 9264.527: 86.0227% ( 287) 01:19:21.268 9264.527 - 9317.166: 87.7486% ( 243) 01:19:21.268 9317.166 - 9369.806: 89.1832% ( 202) 01:19:21.268 9369.806 - 9422.445: 90.3977% ( 171) 01:19:21.268 9422.445 - 9475.084: 91.3707% ( 137) 01:19:21.268 9475.084 - 9527.724: 92.1236% ( 106) 01:19:21.268 9527.724 - 9580.363: 92.6420% ( 73) 01:19:21.268 9580.363 - 9633.002: 93.0966% ( 64) 01:19:21.268 9633.002 - 9685.642: 93.5795% ( 68) 01:19:21.268 9685.642 - 9738.281: 93.9489% ( 
52) 01:19:21.268 9738.281 - 9790.920: 94.2685% ( 45) 01:19:21.268 9790.920 - 9843.560: 94.5810% ( 44) 01:19:21.268 9843.560 - 9896.199: 94.8935% ( 44) 01:19:21.268 9896.199 - 9948.839: 95.1278% ( 33) 01:19:21.268 9948.839 - 10001.478: 95.3338% ( 29) 01:19:21.268 10001.478 - 10054.117: 95.4830% ( 21) 01:19:21.268 10054.117 - 10106.757: 95.6463% ( 23) 01:19:21.268 10106.757 - 10159.396: 95.8168% ( 24) 01:19:21.268 10159.396 - 10212.035: 95.9588% ( 20) 01:19:21.268 10212.035 - 10264.675: 96.0938% ( 19) 01:19:21.268 10264.675 - 10317.314: 96.2074% ( 16) 01:19:21.268 10317.314 - 10369.953: 96.3210% ( 16) 01:19:21.268 10369.953 - 10422.593: 96.4205% ( 14) 01:19:21.268 10422.593 - 10475.232: 96.5270% ( 15) 01:19:21.268 10475.232 - 10527.871: 96.6264% ( 14) 01:19:21.268 10527.871 - 10580.511: 96.7401% ( 16) 01:19:21.268 10580.511 - 10633.150: 96.8608% ( 17) 01:19:21.268 10633.150 - 10685.790: 96.9531% ( 13) 01:19:21.268 10685.790 - 10738.429: 97.0312% ( 11) 01:19:21.268 10738.429 - 10791.068: 97.1094% ( 11) 01:19:21.268 10791.068 - 10843.708: 97.1662% ( 8) 01:19:21.268 10843.708 - 10896.347: 97.2230% ( 8) 01:19:21.268 10896.347 - 10948.986: 97.2798% ( 8) 01:19:21.268 10948.986 - 11001.626: 97.3651% ( 12) 01:19:21.268 11001.626 - 11054.265: 97.4503% ( 12) 01:19:21.268 11054.265 - 11106.904: 97.5071% ( 8) 01:19:21.268 11106.904 - 11159.544: 97.5497% ( 6) 01:19:21.268 11159.544 - 11212.183: 97.5852% ( 5) 01:19:21.268 11212.183 - 11264.822: 97.6207% ( 5) 01:19:21.268 11264.822 - 11317.462: 97.6491% ( 4) 01:19:21.268 11317.462 - 11370.101: 97.6776% ( 4) 01:19:21.268 11370.101 - 11422.741: 97.7131% ( 5) 01:19:21.268 11422.741 - 11475.380: 97.7415% ( 4) 01:19:21.268 11475.380 - 11528.019: 97.7770% ( 5) 01:19:21.268 11528.019 - 11580.659: 97.7983% ( 3) 01:19:21.268 11580.659 - 11633.298: 97.8409% ( 6) 01:19:21.268 11633.298 - 11685.937: 97.8622% ( 3) 01:19:21.268 11685.937 - 11738.577: 97.8977% ( 5) 01:19:21.268 11738.577 - 11791.216: 97.9332% ( 5) 01:19:21.268 11791.216 - 11843.855: 97.9688% ( 5) 01:19:21.268 11843.855 - 11896.495: 98.0043% ( 5) 01:19:21.268 11896.495 - 11949.134: 98.0327% ( 4) 01:19:21.268 11949.134 - 12001.773: 98.0611% ( 4) 01:19:21.268 12001.773 - 12054.413: 98.0895% ( 4) 01:19:21.268 12054.413 - 12107.052: 98.1108% ( 3) 01:19:21.268 12107.052 - 12159.692: 98.1250% ( 2) 01:19:21.268 12159.692 - 12212.331: 98.1534% ( 4) 01:19:21.268 12212.331 - 12264.970: 98.2031% ( 7) 01:19:21.268 12264.970 - 12317.610: 98.2457% ( 6) 01:19:21.268 12317.610 - 12370.249: 98.2741% ( 4) 01:19:21.268 12370.249 - 12422.888: 98.3026% ( 4) 01:19:21.268 12422.888 - 12475.528: 98.3310% ( 4) 01:19:21.268 12475.528 - 12528.167: 98.3594% ( 4) 01:19:21.268 12528.167 - 12580.806: 98.3878% ( 4) 01:19:21.268 12580.806 - 12633.446: 98.4162% ( 4) 01:19:21.268 12633.446 - 12686.085: 98.4375% ( 3) 01:19:21.268 12686.085 - 12738.724: 98.4588% ( 3) 01:19:21.268 12738.724 - 12791.364: 98.4872% ( 4) 01:19:21.268 12791.364 - 12844.003: 98.5156% ( 4) 01:19:21.268 12844.003 - 12896.643: 98.5440% ( 4) 01:19:21.268 12896.643 - 12949.282: 98.5653% ( 3) 01:19:21.268 12949.282 - 13001.921: 98.5938% ( 4) 01:19:21.268 13001.921 - 13054.561: 98.6222% ( 4) 01:19:21.268 13054.561 - 13107.200: 98.6364% ( 2) 01:19:21.268 13580.954 - 13686.233: 98.6577% ( 3) 01:19:21.268 13686.233 - 13791.512: 98.7003% ( 6) 01:19:21.268 13791.512 - 13896.790: 98.7429% ( 6) 01:19:21.268 13896.790 - 14002.069: 98.7926% ( 7) 01:19:21.268 14002.069 - 14107.348: 98.8352% ( 6) 01:19:21.268 14107.348 - 14212.627: 98.8849% ( 7) 01:19:21.268 14212.627 - 14317.905: 
98.9276% ( 6) 01:19:21.268 14317.905 - 14423.184: 98.9773% ( 7) 01:19:21.268 14423.184 - 14528.463: 99.0199% ( 6) 01:19:21.268 14528.463 - 14633.741: 99.0625% ( 6) 01:19:21.268 14633.741 - 14739.020: 99.0909% ( 4) 01:19:21.268 32636.402 - 32846.959: 99.1122% ( 3) 01:19:21.268 32846.959 - 33057.516: 99.1548% ( 6) 01:19:21.268 33057.516 - 33268.074: 99.1974% ( 6) 01:19:21.268 33268.074 - 33478.631: 99.2401% ( 6) 01:19:21.268 33478.631 - 33689.189: 99.2756% ( 5) 01:19:21.268 33689.189 - 33899.746: 99.3182% ( 6) 01:19:21.268 33899.746 - 34110.304: 99.3608% ( 6) 01:19:21.268 34110.304 - 34320.861: 99.3963% ( 5) 01:19:21.268 34320.861 - 34531.418: 99.4389% ( 6) 01:19:21.268 34531.418 - 34741.976: 99.4815% ( 6) 01:19:21.268 34741.976 - 34952.533: 99.5170% ( 5) 01:19:21.268 34952.533 - 35163.091: 99.5455% ( 4) 01:19:21.268 40216.469 - 40427.027: 99.5810% ( 5) 01:19:21.268 40427.027 - 40637.584: 99.6165% ( 5) 01:19:21.268 40637.584 - 40848.141: 99.6591% ( 6) 01:19:21.268 40848.141 - 41058.699: 99.6946% ( 5) 01:19:21.268 41058.699 - 41269.256: 99.7301% ( 5) 01:19:21.268 41269.256 - 41479.814: 99.7727% ( 6) 01:19:21.268 41479.814 - 41690.371: 99.8153% ( 6) 01:19:21.268 41690.371 - 41900.929: 99.8580% ( 6) 01:19:21.268 41900.929 - 42111.486: 99.9006% ( 6) 01:19:21.268 42111.486 - 42322.043: 99.9432% ( 6) 01:19:21.268 42322.043 - 42532.601: 99.9787% ( 5) 01:19:21.268 42532.601 - 42743.158: 100.0000% ( 3) 01:19:21.268 01:19:21.268 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 01:19:21.268 ============================================================================== 01:19:21.268 Range in us Cumulative IO count 01:19:21.269 7790.625 - 7843.264: 0.0355% ( 5) 01:19:21.269 7843.264 - 7895.904: 0.0923% ( 8) 01:19:21.269 7895.904 - 7948.543: 0.3480% ( 36) 01:19:21.269 7948.543 - 8001.182: 0.9375% ( 83) 01:19:21.269 8001.182 - 8053.822: 1.7330% ( 112) 01:19:21.269 8053.822 - 8106.461: 3.6009% ( 263) 01:19:21.269 8106.461 - 8159.100: 5.7244% ( 299) 01:19:21.269 8159.100 - 8211.740: 8.5014% ( 391) 01:19:21.269 8211.740 - 8264.379: 11.4489% ( 415) 01:19:21.269 8264.379 - 8317.018: 14.5384% ( 435) 01:19:21.269 8317.018 - 8369.658: 18.1605% ( 510) 01:19:21.269 8369.658 - 8422.297: 22.0597% ( 549) 01:19:21.269 8422.297 - 8474.937: 26.3565% ( 605) 01:19:21.269 8474.937 - 8527.576: 30.8097% ( 627) 01:19:21.269 8527.576 - 8580.215: 35.3267% ( 636) 01:19:21.269 8580.215 - 8632.855: 40.0071% ( 659) 01:19:21.269 8632.855 - 8685.494: 44.8295% ( 679) 01:19:21.269 8685.494 - 8738.133: 49.9503% ( 721) 01:19:21.269 8738.133 - 8790.773: 55.2060% ( 740) 01:19:21.269 8790.773 - 8843.412: 60.1705% ( 699) 01:19:21.269 8843.412 - 8896.051: 64.8793% ( 663) 01:19:21.269 8896.051 - 8948.691: 69.2188% ( 611) 01:19:21.269 8948.691 - 9001.330: 72.8906% ( 517) 01:19:21.269 9001.330 - 9053.969: 76.1577% ( 460) 01:19:21.269 9053.969 - 9106.609: 78.9134% ( 388) 01:19:21.269 9106.609 - 9159.248: 81.5199% ( 367) 01:19:21.269 9159.248 - 9211.888: 83.7287% ( 311) 01:19:21.269 9211.888 - 9264.527: 85.8239% ( 295) 01:19:21.269 9264.527 - 9317.166: 87.5000% ( 236) 01:19:21.269 9317.166 - 9369.806: 88.9702% ( 207) 01:19:21.269 9369.806 - 9422.445: 90.1705% ( 169) 01:19:21.269 9422.445 - 9475.084: 91.1293% ( 135) 01:19:21.269 9475.084 - 9527.724: 92.0170% ( 125) 01:19:21.269 9527.724 - 9580.363: 92.7557% ( 104) 01:19:21.269 9580.363 - 9633.002: 93.2315% ( 67) 01:19:21.269 9633.002 - 9685.642: 93.6506% ( 59) 01:19:21.269 9685.642 - 9738.281: 94.1122% ( 65) 01:19:21.269 9738.281 - 9790.920: 94.5312% ( 59) 01:19:21.269 9790.920 - 
9843.560: 94.8651% ( 47) 01:19:21.269 9843.560 - 9896.199: 95.1847% ( 45) 01:19:21.269 9896.199 - 9948.839: 95.4261% ( 34) 01:19:21.269 9948.839 - 10001.478: 95.6250% ( 28) 01:19:21.269 10001.478 - 10054.117: 95.7884% ( 23) 01:19:21.269 10054.117 - 10106.757: 95.9446% ( 22) 01:19:21.269 10106.757 - 10159.396: 96.0938% ( 21) 01:19:21.269 10159.396 - 10212.035: 96.2500% ( 22) 01:19:21.269 10212.035 - 10264.675: 96.4062% ( 22) 01:19:21.269 10264.675 - 10317.314: 96.5625% ( 22) 01:19:21.269 10317.314 - 10369.953: 96.6974% ( 19) 01:19:21.269 10369.953 - 10422.593: 96.8253% ( 18) 01:19:21.269 10422.593 - 10475.232: 96.9673% ( 20) 01:19:21.269 10475.232 - 10527.871: 97.0810% ( 16) 01:19:21.269 10527.871 - 10580.511: 97.1804% ( 14) 01:19:21.269 10580.511 - 10633.150: 97.2514% ( 10) 01:19:21.269 10633.150 - 10685.790: 97.3082% ( 8) 01:19:21.269 10685.790 - 10738.429: 97.3651% ( 8) 01:19:21.269 10738.429 - 10791.068: 97.4077% ( 6) 01:19:21.269 10791.068 - 10843.708: 97.4716% ( 9) 01:19:21.269 10843.708 - 10896.347: 97.5071% ( 5) 01:19:21.269 10896.347 - 10948.986: 97.5426% ( 5) 01:19:21.269 10948.986 - 11001.626: 97.5781% ( 5) 01:19:21.269 11001.626 - 11054.265: 97.6065% ( 4) 01:19:21.269 11054.265 - 11106.904: 97.6420% ( 5) 01:19:21.269 11106.904 - 11159.544: 97.6776% ( 5) 01:19:21.269 11159.544 - 11212.183: 97.7060% ( 4) 01:19:21.269 11212.183 - 11264.822: 97.7415% ( 5) 01:19:21.269 11264.822 - 11317.462: 97.7699% ( 4) 01:19:21.269 11317.462 - 11370.101: 97.8054% ( 5) 01:19:21.269 11370.101 - 11422.741: 97.8409% ( 5) 01:19:21.269 11422.741 - 11475.380: 97.8693% ( 4) 01:19:21.269 11475.380 - 11528.019: 97.9048% ( 5) 01:19:21.269 11528.019 - 11580.659: 97.9332% ( 4) 01:19:21.269 11580.659 - 11633.298: 97.9688% ( 5) 01:19:21.269 11633.298 - 11685.937: 98.0043% ( 5) 01:19:21.269 11685.937 - 11738.577: 98.0256% ( 3) 01:19:21.269 11738.577 - 11791.216: 98.0469% ( 3) 01:19:21.269 11791.216 - 11843.855: 98.0611% ( 2) 01:19:21.269 11843.855 - 11896.495: 98.0753% ( 2) 01:19:21.269 11896.495 - 11949.134: 98.0966% ( 3) 01:19:21.269 11949.134 - 12001.773: 98.1108% ( 2) 01:19:21.269 12001.773 - 12054.413: 98.1250% ( 2) 01:19:21.269 12054.413 - 12107.052: 98.1463% ( 3) 01:19:21.269 12107.052 - 12159.692: 98.1605% ( 2) 01:19:21.269 12159.692 - 12212.331: 98.1747% ( 2) 01:19:21.269 12212.331 - 12264.970: 98.1818% ( 1) 01:19:21.269 12738.724 - 12791.364: 98.1960% ( 2) 01:19:21.269 12791.364 - 12844.003: 98.2670% ( 10) 01:19:21.269 12844.003 - 12896.643: 98.3239% ( 8) 01:19:21.269 12896.643 - 12949.282: 98.3594% ( 5) 01:19:21.269 12949.282 - 13001.921: 98.4091% ( 7) 01:19:21.269 13001.921 - 13054.561: 98.4446% ( 5) 01:19:21.269 13054.561 - 13107.200: 98.4943% ( 7) 01:19:21.269 13107.200 - 13159.839: 98.5440% ( 7) 01:19:21.269 13159.839 - 13212.479: 98.5866% ( 6) 01:19:21.269 13212.479 - 13265.118: 98.6435% ( 8) 01:19:21.269 13265.118 - 13317.757: 98.6861% ( 6) 01:19:21.269 13317.757 - 13370.397: 98.7287% ( 6) 01:19:21.269 13370.397 - 13423.036: 98.7784% ( 7) 01:19:21.269 13423.036 - 13475.676: 98.8281% ( 7) 01:19:21.269 13475.676 - 13580.954: 98.9205% ( 13) 01:19:21.269 13580.954 - 13686.233: 99.0128% ( 13) 01:19:21.269 13686.233 - 13791.512: 99.0909% ( 11) 01:19:21.269 30951.942 - 31162.500: 99.1193% ( 4) 01:19:21.269 31162.500 - 31373.057: 99.1690% ( 7) 01:19:21.269 31373.057 - 31583.614: 99.2045% ( 5) 01:19:21.269 31583.614 - 31794.172: 99.2472% ( 6) 01:19:21.269 31794.172 - 32004.729: 99.2827% ( 5) 01:19:21.269 32004.729 - 32215.287: 99.3182% ( 5) 01:19:21.269 32215.287 - 32425.844: 99.3608% ( 6) 01:19:21.269 
32425.844 - 32636.402: 99.4034% ( 6) 01:19:21.269 32636.402 - 32846.959: 99.4318% ( 4) 01:19:21.269 32846.959 - 33057.516: 99.4744% ( 6) 01:19:21.269 33057.516 - 33268.074: 99.5170% ( 6) 01:19:21.269 33268.074 - 33478.631: 99.5455% ( 4) 01:19:21.269 38321.452 - 38532.010: 99.5668% ( 3) 01:19:21.269 38532.010 - 38742.567: 99.6094% ( 6) 01:19:21.269 38742.567 - 38953.124: 99.6449% ( 5) 01:19:21.269 38953.124 - 39163.682: 99.6804% ( 5) 01:19:21.269 39163.682 - 39374.239: 99.7230% ( 6) 01:19:21.269 39374.239 - 39584.797: 99.7585% ( 5) 01:19:21.269 39584.797 - 39795.354: 99.8011% ( 6) 01:19:21.269 39795.354 - 40005.912: 99.8438% ( 6) 01:19:21.269 40005.912 - 40216.469: 99.8793% ( 5) 01:19:21.269 40216.469 - 40427.027: 99.9219% ( 6) 01:19:21.269 40427.027 - 40637.584: 99.9645% ( 6) 01:19:21.269 40637.584 - 40848.141: 100.0000% ( 5) 01:19:21.269 01:19:21.269 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 01:19:21.269 ============================================================================== 01:19:21.269 Range in us Cumulative IO count 01:19:21.269 7790.625 - 7843.264: 0.0142% ( 2) 01:19:21.269 7843.264 - 7895.904: 0.0639% ( 7) 01:19:21.269 7895.904 - 7948.543: 0.3906% ( 46) 01:19:21.269 7948.543 - 8001.182: 0.8310% ( 62) 01:19:21.269 8001.182 - 8053.822: 1.8750% ( 147) 01:19:21.269 8053.822 - 8106.461: 3.5227% ( 232) 01:19:21.269 8106.461 - 8159.100: 5.6392% ( 298) 01:19:21.269 8159.100 - 8211.740: 8.3452% ( 381) 01:19:21.269 8211.740 - 8264.379: 11.2713% ( 412) 01:19:21.269 8264.379 - 8317.018: 14.5455% ( 461) 01:19:21.269 8317.018 - 8369.658: 18.1463% ( 507) 01:19:21.269 8369.658 - 8422.297: 21.9815% ( 540) 01:19:21.269 8422.297 - 8474.937: 26.0724% ( 576) 01:19:21.269 8474.937 - 8527.576: 30.3693% ( 605) 01:19:21.269 8527.576 - 8580.215: 35.1278% ( 670) 01:19:21.269 8580.215 - 8632.855: 40.0355% ( 691) 01:19:21.269 8632.855 - 8685.494: 45.0142% ( 701) 01:19:21.269 8685.494 - 8738.133: 50.1989% ( 730) 01:19:21.269 8738.133 - 8790.773: 55.1918% ( 703) 01:19:21.269 8790.773 - 8843.412: 60.1349% ( 696) 01:19:21.269 8843.412 - 8896.051: 64.8864% ( 669) 01:19:21.269 8896.051 - 8948.691: 69.2472% ( 614) 01:19:21.269 8948.691 - 9001.330: 72.9901% ( 527) 01:19:21.269 9001.330 - 9053.969: 76.3494% ( 473) 01:19:21.269 9053.969 - 9106.609: 79.1193% ( 390) 01:19:21.269 9106.609 - 9159.248: 81.7188% ( 366) 01:19:21.269 9159.248 - 9211.888: 83.9418% ( 313) 01:19:21.269 9211.888 - 9264.527: 86.0298% ( 294) 01:19:21.269 9264.527 - 9317.166: 87.7983% ( 249) 01:19:21.269 9317.166 - 9369.806: 89.2969% ( 211) 01:19:21.269 9369.806 - 9422.445: 90.4190% ( 158) 01:19:21.269 9422.445 - 9475.084: 91.3423% ( 130) 01:19:21.269 9475.084 - 9527.724: 92.1094% ( 108) 01:19:21.269 9527.724 - 9580.363: 92.6847% ( 81) 01:19:21.269 9580.363 - 9633.002: 93.1818% ( 70) 01:19:21.269 9633.002 - 9685.642: 93.6222% ( 62) 01:19:21.269 9685.642 - 9738.281: 94.0554% ( 61) 01:19:21.269 9738.281 - 9790.920: 94.4957% ( 62) 01:19:21.269 9790.920 - 9843.560: 94.8153% ( 45) 01:19:21.269 9843.560 - 9896.199: 95.0568% ( 34) 01:19:21.269 9896.199 - 9948.839: 95.2344% ( 25) 01:19:21.269 9948.839 - 10001.478: 95.4048% ( 24) 01:19:21.269 10001.478 - 10054.117: 95.5682% ( 23) 01:19:21.269 10054.117 - 10106.757: 95.6960% ( 18) 01:19:21.269 10106.757 - 10159.396: 95.8097% ( 16) 01:19:21.269 10159.396 - 10212.035: 95.9446% ( 19) 01:19:21.269 10212.035 - 10264.675: 96.0866% ( 20) 01:19:21.269 10264.675 - 10317.314: 96.2287% ( 20) 01:19:21.269 10317.314 - 10369.953: 96.3991% ( 24) 01:19:21.269 10369.953 - 10422.593: 96.5341% ( 19) 
01:19:21.269 [tail of the preceding device's per-bucket latency histogram elided: ranges 10422.593 - 38532.010us, cumulative IO count rising from 96.6761% to 100.0000%]
01:19:21.270
01:19:21.270 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
01:19:21.270 ==============================================================================
01:19:21.270        Range in us     Cumulative    IO count
01:19:21.270 [per-bucket latency entries elided: ranges 7790.625 - 36005.320us, cumulative IO count rising from 0.0142% to 100.0000%]
01:19:21.270
01:19:21.530 05:14:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
01:19:22.909 Initializing NVMe Controllers
01:19:22.909 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
01:19:22.909 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
01:19:22.909 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
01:19:22.909 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
01:19:22.909 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
01:19:22.909 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
01:19:22.909 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
01:19:22.909 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
01:19:22.909 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
01:19:22.909 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
01:19:22.909 Initialization complete. Launching workers.
01:19:22.909 ========================================================
01:19:22.909                                                  Latency(us)
01:19:22.909 Device Information                     :       IOPS      MiB/s    Average        min        max
01:19:22.909 PCIE (0000:00:13.0) NSID 1 from core 0:   13596.54     159.33    9438.88    7135.29   46793.95
01:19:22.909 PCIE (0000:00:10.0) NSID 1 from core 0:   13596.54     159.33    9421.03    7220.13   44829.74
01:19:22.909 PCIE (0000:00:11.0) NSID 1 from core 0:   13596.54     159.33    9404.15    7263.08   42280.24
01:19:22.909 PCIE (0000:00:12.0) NSID 1 from core 0:   13596.54     159.33    9388.08    7046.91   40411.55
01:19:22.909 PCIE (0000:00:12.0) NSID 2 from core 0:   13596.54     159.33    9371.98    7132.14   38416.31
01:19:22.909 PCIE (0000:00:12.0) NSID 3 from core 0:   13660.37     160.08    9311.18    7367.77   31168.36
01:19:22.909 ========================================================
01:19:22.909 Total                                  :   81643.07     956.75    9389.16    7046.91   46793.95
01:19:22.909
01:19:22.909 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
01:19:22.909 =================================================================================
01:19:22.909   1.00000% :  7580.067us
01:19:22.909  10.00000% :  7948.543us
01:19:22.909  25.00000% :  8211.740us
01:19:22.909  50.00000% :  8580.215us
01:19:22.909  75.00000% :  9264.527us
01:19:22.909  90.00000% : 11159.544us
01:19:22.909  95.00000% : 12896.643us
01:19:22.909  98.00000% : 18950.169us
01:19:22.909  99.00000% : 22634.924us
01:19:22.909  99.50000% : 39163.682us
01:19:22.909  99.90000% : 46533.192us
01:19:22.909  99.99000% : 46954.307us
01:19:22.909  99.99900% : 46954.307us
01:19:22.909  99.99990% : 46954.307us
01:19:22.909  99.99999% : 46954.307us
01:19:22.909
01:19:22.909 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
01:19:22.909 =================================================================================
01:19:22.909   1.00000% :  7527.428us
01:19:22.909  10.00000% :  7895.904us
01:19:22.909  25.00000% :  8211.740us
01:19:22.909  50.00000% :  8632.855us
01:19:22.909  75.00000% :  9264.527us
01:19:22.909  90.00000% : 11212.183us
01:19:22.909  95.00000% : 13159.839us
01:19:22.909  98.00000% : 18844.890us
01:19:22.909  99.00000% : 22424.366us
01:19:22.909  99.50000% : 37058.108us
01:19:22.909  99.90000% : 44427.618us
01:19:22.909  99.99000% : 44848.733us
01:19:22.909  99.99900% : 44848.733us
01:19:22.909  99.99990% : 44848.733us
01:19:22.909  99.99999% : 44848.733us
01:19:22.909
01:19:22.909 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
01:19:22.909 =================================================================================
01:19:22.909   1.00000% :  7580.067us
01:19:22.909  10.00000% :  7948.543us
01:19:22.909  25.00000% :  8211.740us
01:19:22.909  50.00000% :  8580.215us
01:19:22.909  75.00000% :  9264.527us
01:19:22.909  90.00000% : 11159.544us
01:19:22.909  95.00000% : 13686.233us
01:19:22.909  98.00000% : 18213.218us
01:19:22.909  99.00000% : 22003.251us
01:19:22.909  99.50000% : 35373.648us
01:19:22.910  99.90000% : 41900.929us
01:19:22.910  99.99000% : 42322.043us
01:19:22.910  99.99900% : 42322.043us
01:19:22.910  99.99990% : 42322.043us
01:19:22.910  99.99999% : 42322.043us
01:19:22.910
01:19:22.910 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
01:19:22.910 =================================================================================
01:19:22.910   1.00000% :  7527.428us
01:19:22.910  10.00000% :  7948.543us
01:19:22.910  25.00000% :  8211.740us
01:19:22.910  50.00000% :  8632.855us
01:19:22.910  75.00000% :  9317.166us
01:19:22.910  90.00000% : 11054.265us
01:19:22.910  95.00000% : 13791.512us
01:19:22.910  98.00000% : 17055.152us
01:19:22.910  99.00000% : 22950.760us
01:19:22.910  99.50000% : 34110.304us
01:19:22.910  99.90000% : 40005.912us
01:19:22.910  99.99000% : 40427.027us
01:19:22.910  99.99900% : 40427.027us
01:19:22.910  99.99990% : 40427.027us
01:19:22.910  99.99999% : 40427.027us
01:19:22.910
01:19:22.910 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
01:19:22.910 =================================================================================
01:19:22.910   1.00000% :  7632.707us
01:19:22.910  10.00000% :  7948.543us
01:19:22.910  25.00000% :  8211.740us
01:19:22.910  50.00000% :  8580.215us
01:19:22.910  75.00000% :  9317.166us
01:19:22.910  90.00000% : 11106.904us
01:19:22.910  95.00000% : 13423.036us
01:19:22.910  98.00000% : 16949.873us
01:19:22.910  99.00000% : 22740.202us
01:19:22.910  99.50000% : 32004.729us
01:19:22.910  99.90000% : 38110.895us
01:19:22.910  99.99000% : 38532.010us
01:19:22.910  99.99900% : 38532.010us
01:19:22.910  99.99990% : 38532.010us
01:19:22.910  99.99999% : 38532.010us
01:19:22.910
01:19:22.910 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
01:19:22.910 =================================================================================
01:19:22.910   1.00000% :  7632.707us
01:19:22.910  10.00000% :  7948.543us
01:19:22.910  25.00000% :  8211.740us
01:19:22.910  50.00000% :  8580.215us
01:19:22.910  75.00000% :  9317.166us
01:19:22.910  90.00000% : 11264.822us
01:19:22.910  95.00000% : 13423.036us
01:19:22.910  98.00000% : 17897.382us
01:19:22.910  99.00000% : 22424.366us
01:19:22.910  99.50000% : 23687.711us
01:19:22.910  99.90000% : 30951.942us
01:19:22.910  99.99000% : 31162.500us
01:19:22.910  99.99900% : 31373.057us
01:19:22.910  99.99990% : 31373.057us
01:19:22.910  99.99999% : 31373.057us
01:19:22.910
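Each summary block above uses a fixed "percentile : latency" line format, so per-device percentiles can be pulled out of a saved autotest console log with a short script. The sketch below is illustrative only, not part of the SPDK tooling: the script name and log file passed on the command line are assumed examples, and the regular expressions are derived solely from the line shapes visible in this output.

    import re
    import sys

    # Matches "Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:"
    DEVICE_RE = re.compile(r'Summary latency data for (.+?) from core \d+')
    # Matches "99.00000% : 22424.366us" (histogram bucket lines lack "us", so they do not match)
    PCT_RE = re.compile(r'(\d+\.\d+)%\s*:\s*(\d+\.\d+)us')

    def parse_summaries(path):
        """Return {device: {percentile: latency_us}} parsed from a console log."""
        summaries, current = {}, None
        with open(path) as log:
            for line in log:
                dev = DEVICE_RE.search(line)
                if dev:
                    current = dev.group(1)
                    summaries[current] = {}
                elif current:
                    pct = PCT_RE.search(line)
                    if pct:
                        summaries[current][float(pct.group(1))] = float(pct.group(2))
        return summaries

    if __name__ == '__main__':
        # e.g.: python parse_nvme_perf_log.py console.log
        for dev, pcts in parse_summaries(sys.argv[1]).items():
            print(dev, '-> p50:', pcts.get(50.0), 'us, p99:', pcts.get(99.0), 'us')

Run against this capture, it would report, for example, p50 8580.215us and p99 22634.924us for PCIE (0000:00:13.0) NSID 1.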
01:19:22.910 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
01:19:22.910 ==============================================================================
01:19:22.910        Range in us     Cumulative    IO count
01:19:22.910 [per-bucket latency entries elided: ranges 7106.313 - 46954.307us, cumulative IO count rising from 0.0073% to 100.0000%]
01:19:22.911
01:19:22.911 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
01:19:22.911 ==============================================================================
01:19:22.911        Range in us     Cumulative    IO count
01:19:22.911 [per-bucket latency entries elided: ranges 7211.592 - 44848.733us, cumulative IO count rising from 0.0073% to 100.0000%]
01:19:22.912
01:19:22.912 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
01:19:22.912 ==============================================================================
01:19:22.912        Range in us     Cumulative    IO count
01:19:22.912 [per-bucket latency entries elided: ranges 7211.592 - 42322.043us, cumulative IO count rising from 0.0073% to 100.0000%]
01:19:22.913
01:19:22.913 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
01:19:22.913 ==============================================================================
01:19:22.913        Range in us     Cumulative    IO count
01:19:22.913 [per-bucket latency entries elided: ranges 7001.035 - 40427.027us, cumulative IO count rising from 0.0073% to 100.0000%]
01:19:22.914
01:19:22.914 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
01:19:22.914 ==============================================================================
01:19:22.914        Range in us     Cumulative    IO count
01:19:22.914 [per-bucket latency entries elided: buckets begin at 7106.313 - 7158.953us with 0.0147%; the captured log breaks off mid-histogram at 98.4668% (20634.628us), and no histogram for PCIE (0000:00:12.0) NSID 3 appears in this capture]
( 5) 01:19:22.915 20634.628 - 20739.907: 98.5035% ( 5) 01:19:22.915 20739.907 - 20845.186: 98.5402% ( 5) 01:19:22.915 20845.186 - 20950.464: 98.5769% ( 5) 01:19:22.915 20950.464 - 21055.743: 98.5915% ( 2) 01:19:22.915 21792.694 - 21897.973: 98.5989% ( 1) 01:19:22.915 21897.973 - 22003.251: 98.6796% ( 11) 01:19:22.915 22003.251 - 22108.530: 98.7163% ( 5) 01:19:22.915 22108.530 - 22213.809: 98.7529% ( 5) 01:19:22.915 22213.809 - 22319.088: 98.8043% ( 7) 01:19:22.915 22319.088 - 22424.366: 98.8556% ( 7) 01:19:22.915 22424.366 - 22529.645: 98.8996% ( 6) 01:19:22.915 22529.645 - 22634.924: 98.9510% ( 7) 01:19:22.915 22634.924 - 22740.202: 99.0023% ( 7) 01:19:22.915 22740.202 - 22845.481: 99.0464% ( 6) 01:19:22.915 22845.481 - 22950.760: 99.0610% ( 2) 01:19:22.915 29899.155 - 30109.712: 99.0977% ( 5) 01:19:22.915 30109.712 - 30320.270: 99.1491% ( 7) 01:19:22.915 30320.270 - 30530.827: 99.2004% ( 7) 01:19:22.915 30530.827 - 30741.385: 99.2371% ( 5) 01:19:22.915 30741.385 - 30951.942: 99.2811% ( 6) 01:19:22.915 30951.942 - 31162.500: 99.3325% ( 7) 01:19:22.915 31162.500 - 31373.057: 99.3838% ( 7) 01:19:22.915 31373.057 - 31583.614: 99.4352% ( 7) 01:19:22.915 31583.614 - 31794.172: 99.4865% ( 7) 01:19:22.915 31794.172 - 32004.729: 99.5305% ( 6) 01:19:22.915 36426.435 - 36636.993: 99.5819% ( 7) 01:19:22.915 36636.993 - 36847.550: 99.6259% ( 6) 01:19:22.915 36847.550 - 37058.108: 99.6699% ( 6) 01:19:22.915 37058.108 - 37268.665: 99.7212% ( 7) 01:19:22.915 37268.665 - 37479.222: 99.7726% ( 7) 01:19:22.915 37479.222 - 37689.780: 99.8239% ( 7) 01:19:22.915 37689.780 - 37900.337: 99.8753% ( 7) 01:19:22.915 37900.337 - 38110.895: 99.9193% ( 6) 01:19:22.915 38110.895 - 38321.452: 99.9707% ( 7) 01:19:22.915 38321.452 - 38532.010: 100.0000% ( 4) 01:19:22.915 01:19:22.915 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 01:19:22.915 ============================================================================== 01:19:22.915 Range in us Cumulative IO count 01:19:22.915 7316.871 - 7369.510: 0.0073% ( 1) 01:19:22.916 7369.510 - 7422.149: 0.1022% ( 13) 01:19:22.916 7422.149 - 7474.789: 0.2190% ( 16) 01:19:22.916 7474.789 - 7527.428: 0.4600% ( 33) 01:19:22.916 7527.428 - 7580.067: 0.7959% ( 46) 01:19:22.916 7580.067 - 7632.707: 1.3581% ( 77) 01:19:22.916 7632.707 - 7685.346: 2.1539% ( 109) 01:19:22.916 7685.346 - 7737.986: 3.1980% ( 143) 01:19:22.916 7737.986 - 7790.625: 4.6583% ( 200) 01:19:22.916 7790.625 - 7843.264: 6.4179% ( 241) 01:19:22.916 7843.264 - 7895.904: 8.7982% ( 326) 01:19:22.916 7895.904 - 7948.543: 11.1346% ( 320) 01:19:22.916 7948.543 - 8001.182: 13.7193% ( 354) 01:19:22.916 8001.182 - 8053.822: 16.7202% ( 411) 01:19:22.916 8053.822 - 8106.461: 19.7722% ( 418) 01:19:22.916 8106.461 - 8159.100: 22.8899% ( 427) 01:19:22.916 8159.100 - 8211.740: 26.4822% ( 492) 01:19:22.916 8211.740 - 8264.379: 29.7751% ( 451) 01:19:22.916 8264.379 - 8317.018: 33.3017% ( 483) 01:19:22.916 8317.018 - 8369.658: 36.8064% ( 480) 01:19:22.916 8369.658 - 8422.297: 40.1723% ( 461) 01:19:22.916 8422.297 - 8474.937: 43.4579% ( 450) 01:19:22.916 8474.937 - 8527.576: 47.1013% ( 499) 01:19:22.916 8527.576 - 8580.215: 50.1752% ( 421) 01:19:22.916 8580.215 - 8632.855: 53.8478% ( 503) 01:19:22.916 8632.855 - 8685.494: 56.8998% ( 418) 01:19:22.916 8685.494 - 8738.133: 59.3385% ( 334) 01:19:22.916 8738.133 - 8790.773: 61.7845% ( 335) 01:19:22.916 8790.773 - 8843.412: 63.5952% ( 248) 01:19:22.916 8843.412 - 8896.051: 65.3548% ( 241) 01:19:22.916 8896.051 - 8948.691: 66.8881% ( 210) 01:19:22.916 8948.691 - 9001.330: 
68.1951% ( 179) 01:19:22.916 9001.330 - 9053.969: 69.2611% ( 146) 01:19:22.916 9053.969 - 9106.609: 70.4293% ( 160) 01:19:22.916 9106.609 - 9159.248: 71.4880% ( 145) 01:19:22.916 9159.248 - 9211.888: 72.4372% ( 130) 01:19:22.916 9211.888 - 9264.527: 73.6857% ( 171) 01:19:22.916 9264.527 - 9317.166: 75.1022% ( 194) 01:19:22.916 9317.166 - 9369.806: 76.4968% ( 191) 01:19:22.916 9369.806 - 9422.445: 77.9644% ( 201) 01:19:22.916 9422.445 - 9475.084: 79.3078% ( 184) 01:19:22.916 9475.084 - 9527.724: 80.4395% ( 155) 01:19:22.916 9527.724 - 9580.363: 81.5275% ( 149) 01:19:22.916 9580.363 - 9633.002: 82.5204% ( 136) 01:19:22.916 9633.002 - 9685.642: 82.9877% ( 64) 01:19:22.916 9685.642 - 9738.281: 83.3674% ( 52) 01:19:22.916 9738.281 - 9790.920: 83.6668% ( 41) 01:19:22.916 9790.920 - 9843.560: 84.0099% ( 47) 01:19:22.916 9843.560 - 9896.199: 84.3750% ( 50) 01:19:22.916 9896.199 - 9948.839: 84.7109% ( 46) 01:19:22.916 9948.839 - 10001.478: 84.9664% ( 35) 01:19:22.916 10001.478 - 10054.117: 85.2147% ( 34) 01:19:22.916 10054.117 - 10106.757: 85.5505% ( 46) 01:19:22.916 10106.757 - 10159.396: 85.7112% ( 22) 01:19:22.916 10159.396 - 10212.035: 85.8864% ( 24) 01:19:22.916 10212.035 - 10264.675: 86.0543% ( 23) 01:19:22.916 10264.675 - 10317.314: 86.2004% ( 20) 01:19:22.916 10317.314 - 10369.953: 86.5508% ( 48) 01:19:22.916 10369.953 - 10422.593: 86.7261% ( 24) 01:19:22.916 10422.593 - 10475.232: 86.9524% ( 31) 01:19:22.916 10475.232 - 10527.871: 87.1860% ( 32) 01:19:22.916 10527.871 - 10580.511: 87.4270% ( 33) 01:19:22.916 10580.511 - 10633.150: 87.6168% ( 26) 01:19:22.916 10633.150 - 10685.790: 87.8724% ( 35) 01:19:22.916 10685.790 - 10738.429: 88.0403% ( 23) 01:19:22.916 10738.429 - 10791.068: 88.2301% ( 26) 01:19:22.916 10791.068 - 10843.708: 88.3251% ( 13) 01:19:22.916 10843.708 - 10896.347: 88.4711% ( 20) 01:19:22.916 10896.347 - 10948.986: 88.6536% ( 25) 01:19:22.916 10948.986 - 11001.626: 88.8289% ( 24) 01:19:22.916 11001.626 - 11054.265: 89.0406% ( 29) 01:19:22.916 11054.265 - 11106.904: 89.3473% ( 42) 01:19:22.916 11106.904 - 11159.544: 89.5736% ( 31) 01:19:22.916 11159.544 - 11212.183: 89.9095% ( 46) 01:19:22.916 11212.183 - 11264.822: 90.1358% ( 31) 01:19:22.916 11264.822 - 11317.462: 90.3914% ( 35) 01:19:22.916 11317.462 - 11370.101: 90.6323% ( 33) 01:19:22.916 11370.101 - 11422.741: 90.9609% ( 45) 01:19:22.916 11422.741 - 11475.380: 91.0704% ( 15) 01:19:22.916 11475.380 - 11528.019: 91.1726% ( 14) 01:19:22.916 11528.019 - 11580.659: 91.2602% ( 12) 01:19:22.916 11580.659 - 11633.298: 91.3697% ( 15) 01:19:22.916 11633.298 - 11685.937: 91.4793% ( 15) 01:19:22.916 11685.937 - 11738.577: 91.6253% ( 20) 01:19:22.916 11738.577 - 11791.216: 91.7494% ( 17) 01:19:22.916 11791.216 - 11843.855: 92.0196% ( 37) 01:19:22.916 11843.855 - 11896.495: 92.1802% ( 22) 01:19:22.916 11896.495 - 11949.134: 92.3627% ( 25) 01:19:22.916 11949.134 - 12001.773: 92.5818% ( 30) 01:19:22.916 12001.773 - 12054.413: 92.7132% ( 18) 01:19:22.916 12054.413 - 12107.052: 92.8519% ( 19) 01:19:22.916 12107.052 - 12159.692: 93.0053% ( 21) 01:19:22.916 12159.692 - 12212.331: 93.1294% ( 17) 01:19:22.916 12212.331 - 12264.970: 93.2097% ( 11) 01:19:22.916 12264.970 - 12317.610: 93.2681% ( 8) 01:19:22.916 12317.610 - 12370.249: 93.3411% ( 10) 01:19:22.916 12370.249 - 12422.888: 93.4214% ( 11) 01:19:22.916 12422.888 - 12475.528: 93.5018% ( 11) 01:19:22.916 12475.528 - 12528.167: 93.5967% ( 13) 01:19:22.916 12528.167 - 12580.806: 93.6989% ( 14) 01:19:22.916 12580.806 - 12633.446: 94.0129% ( 43) 01:19:22.916 12633.446 - 12686.085: 94.1151% 
( 14) 01:19:22.916 12686.085 - 12738.724: 94.1808% ( 9) 01:19:22.916 12738.724 - 12791.364: 94.2757% ( 13) 01:19:22.916 12791.364 - 12844.003: 94.3414% ( 9) 01:19:22.916 12844.003 - 12896.643: 94.4874% ( 20) 01:19:22.916 12896.643 - 12949.282: 94.5605% ( 10) 01:19:22.916 12949.282 - 13001.921: 94.6043% ( 6) 01:19:22.916 13001.921 - 13054.561: 94.6335% ( 4) 01:19:22.916 13054.561 - 13107.200: 94.6773% ( 6) 01:19:22.916 13107.200 - 13159.839: 94.7065% ( 4) 01:19:22.916 13159.839 - 13212.479: 94.7430% ( 5) 01:19:22.916 13212.479 - 13265.118: 94.7941% ( 7) 01:19:22.916 13265.118 - 13317.757: 94.8525% ( 8) 01:19:22.916 13317.757 - 13370.397: 94.9328% ( 11) 01:19:22.916 13370.397 - 13423.036: 95.0277% ( 13) 01:19:22.916 13423.036 - 13475.676: 95.0935% ( 9) 01:19:22.916 13475.676 - 13580.954: 95.1738% ( 11) 01:19:22.916 13580.954 - 13686.233: 95.2760% ( 14) 01:19:22.916 13686.233 - 13791.512: 95.4220% ( 20) 01:19:22.916 13791.512 - 13896.790: 95.5461% ( 17) 01:19:22.916 13896.790 - 14002.069: 95.7068% ( 22) 01:19:22.916 14002.069 - 14107.348: 95.7506% ( 6) 01:19:22.916 14107.348 - 14212.627: 95.7725% ( 3) 01:19:22.916 14212.627 - 14317.905: 95.7944% ( 3) 01:19:22.916 14528.463 - 14633.741: 95.8017% ( 1) 01:19:22.916 14633.741 - 14739.020: 95.8455% ( 6) 01:19:22.916 14739.020 - 14844.299: 95.9623% ( 16) 01:19:22.916 14844.299 - 14949.578: 96.1303% ( 23) 01:19:22.916 14949.578 - 15054.856: 96.2617% ( 18) 01:19:22.916 15054.856 - 15160.135: 96.4004% ( 19) 01:19:22.916 15160.135 - 15265.414: 96.6633% ( 36) 01:19:22.916 15265.414 - 15370.692: 97.0575% ( 54) 01:19:22.916 15370.692 - 15475.971: 97.2766% ( 30) 01:19:22.916 15475.971 - 15581.250: 97.4226% ( 20) 01:19:22.916 15581.250 - 15686.529: 97.5029% ( 11) 01:19:22.916 15686.529 - 15791.807: 97.5248% ( 3) 01:19:22.916 15791.807 - 15897.086: 97.5467% ( 3) 01:19:22.916 15897.086 - 16002.365: 97.5759% ( 4) 01:19:22.916 16002.365 - 16107.643: 97.6051% ( 4) 01:19:22.916 16107.643 - 16212.922: 97.6416% ( 5) 01:19:22.916 16212.922 - 16318.201: 97.6636% ( 3) 01:19:22.916 17581.545 - 17686.824: 97.7731% ( 15) 01:19:22.916 17686.824 - 17792.103: 97.8972% ( 17) 01:19:22.916 17792.103 - 17897.382: 98.0359% ( 19) 01:19:22.916 17897.382 - 18002.660: 98.1162% ( 11) 01:19:22.916 18002.660 - 18107.939: 98.1308% ( 2) 01:19:22.916 19160.726 - 19266.005: 98.1600% ( 4) 01:19:22.916 19266.005 - 19371.284: 98.1820% ( 3) 01:19:22.916 19371.284 - 19476.562: 98.2258% ( 6) 01:19:22.916 19476.562 - 19581.841: 98.2623% ( 5) 01:19:22.916 19581.841 - 19687.120: 98.3061% ( 6) 01:19:22.916 19687.120 - 19792.398: 98.3426% ( 5) 01:19:22.916 19792.398 - 19897.677: 98.3791% ( 5) 01:19:22.916 19897.677 - 20002.956: 98.4229% ( 6) 01:19:22.916 20002.956 - 20108.235: 98.4594% ( 5) 01:19:22.916 20108.235 - 20213.513: 98.4959% ( 5) 01:19:22.916 20213.513 - 20318.792: 98.5324% ( 5) 01:19:22.916 20318.792 - 20424.071: 98.5689% ( 5) 01:19:22.916 20424.071 - 20529.349: 98.5981% ( 4) 01:19:22.916 21687.415 - 21792.694: 98.6127% ( 2) 01:19:22.916 21792.694 - 21897.973: 98.6930% ( 11) 01:19:22.916 21897.973 - 22003.251: 98.7588% ( 9) 01:19:22.916 22003.251 - 22108.530: 98.8099% ( 7) 01:19:22.916 22108.530 - 22213.809: 98.8756% ( 9) 01:19:22.916 22213.809 - 22319.088: 98.9413% ( 9) 01:19:22.916 22319.088 - 22424.366: 99.0070% ( 9) 01:19:22.916 22424.366 - 22529.645: 99.0800% ( 10) 01:19:22.916 22529.645 - 22634.924: 99.1311% ( 7) 01:19:22.916 22634.924 - 22740.202: 99.1895% ( 8) 01:19:22.916 22740.202 - 22845.481: 99.2480% ( 8) 01:19:22.916 22845.481 - 22950.760: 99.3137% ( 9) 01:19:22.916 22950.760 
- 23056.039: 99.3575% ( 6) 01:19:22.916 23056.039 - 23161.317: 99.3867% ( 4) 01:19:22.916 23161.317 - 23266.596: 99.4159% ( 4) 01:19:22.916 23266.596 - 23371.875: 99.4451% ( 4) 01:19:22.916 23371.875 - 23477.153: 99.4670% ( 3) 01:19:22.916 23477.153 - 23582.432: 99.4962% ( 4) 01:19:22.916 23582.432 - 23687.711: 99.5181% ( 3) 01:19:22.916 23687.711 - 23792.990: 99.5327% ( 2) 01:19:22.916 29056.925 - 29267.483: 99.5473% ( 2) 01:19:22.916 29267.483 - 29478.040: 99.5911% ( 6) 01:19:22.916 29478.040 - 29688.598: 99.6422% ( 7) 01:19:22.916 29688.598 - 29899.155: 99.6933% ( 7) 01:19:22.916 29899.155 - 30109.712: 99.7445% ( 7) 01:19:22.916 30109.712 - 30320.270: 99.7956% ( 7) 01:19:22.916 30320.270 - 30530.827: 99.8467% ( 7) 01:19:22.916 30530.827 - 30741.385: 99.8978% ( 7) 01:19:22.916 30741.385 - 30951.942: 99.9489% ( 7) 01:19:22.916 30951.942 - 31162.500: 99.9927% ( 6) 01:19:22.916 31162.500 - 31373.057: 100.0000% ( 1) 01:19:22.916 01:19:22.916 05:14:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 01:19:22.916 01:19:22.916 real 0m2.887s 01:19:22.916 user 0m2.469s 01:19:22.916 sys 0m0.304s 01:19:22.916 05:14:05 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:22.916 05:14:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 01:19:22.916 ************************************ 01:19:22.916 END TEST nvme_perf 01:19:22.916 ************************************ 01:19:22.917 05:14:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 01:19:22.917 05:14:05 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:19:22.917 05:14:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:22.917 05:14:05 nvme -- common/autotest_common.sh@10 -- # set +x 01:19:22.917 ************************************ 01:19:22.917 START TEST nvme_hello_world 01:19:22.917 ************************************ 01:19:22.917 05:14:05 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 01:19:23.175 Initializing NVMe Controllers 01:19:23.175 Attached to 0000:00:13.0 01:19:23.175 Namespace ID: 1 size: 1GB 01:19:23.175 Attached to 0000:00:10.0 01:19:23.175 Namespace ID: 1 size: 6GB 01:19:23.175 Attached to 0000:00:11.0 01:19:23.175 Namespace ID: 1 size: 5GB 01:19:23.175 Attached to 0000:00:12.0 01:19:23.175 Namespace ID: 1 size: 4GB 01:19:23.175 Namespace ID: 2 size: 4GB 01:19:23.175 Namespace ID: 3 size: 4GB 01:19:23.175 Initialization complete. 01:19:23.175 INFO: using host memory buffer for IO 01:19:23.175 Hello world! 01:19:23.175 INFO: using host memory buffer for IO 01:19:23.175 Hello world! 01:19:23.175 INFO: using host memory buffer for IO 01:19:23.175 Hello world! 01:19:23.175 INFO: using host memory buffer for IO 01:19:23.175 Hello world! 01:19:23.175 INFO: using host memory buffer for IO 01:19:23.175 Hello world! 01:19:23.175 INFO: using host memory buffer for IO 01:19:23.175 Hello world! 
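The bucket dumps above are easier to consume with a little text processing. A minimal sketch, not part of the SPDK harness: the log file name console.log is hypothetical, and every line is assumed to start with the Jenkins timestamp, so bucket lines look like "<ts> <low> - <high>: <cum>% ( <count> )".

    awk '
    /histogram/ && !/ - / { dev = $0; done = 0 }   # remember the current histogram header
    / - .*%/ {
        hi = $4;  sub(/:$/, "", hi)                # upper bucket bound in us
        cum = $5; sub(/%$/, "", cum)               # cumulative percentage
        if (!done && cum + 0 >= 99) { printf "%s => ~p99 <= %s us\n", dev, hi; done = 1 }
    }' console.log

For each histogram this prints the first bucket whose cumulative share reaches 99%, a serviceable p99 estimate given only the cumulative table.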
01:19:22.917 05:14:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
01:19:22.917 05:14:05 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
01:19:22.917 05:14:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:22.917 05:14:05 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:22.917 ************************************
01:19:22.917 START TEST nvme_hello_world
01:19:22.917 ************************************
01:19:22.917 05:14:05 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
01:19:23.175 Initializing NVMe Controllers
01:19:23.175 Attached to 0000:00:13.0
01:19:23.175 Namespace ID: 1 size: 1GB
01:19:23.175 Attached to 0000:00:10.0
01:19:23.175 Namespace ID: 1 size: 6GB
01:19:23.175 Attached to 0000:00:11.0
01:19:23.175 Namespace ID: 1 size: 5GB
01:19:23.175 Attached to 0000:00:12.0
01:19:23.175 Namespace ID: 1 size: 4GB
01:19:23.175 Namespace ID: 2 size: 4GB
01:19:23.175 Namespace ID: 3 size: 4GB
01:19:23.175 Initialization complete.
01:19:23.175 INFO: using host memory buffer for IO
01:19:23.175 Hello world!
01:19:23.175 INFO: using host memory buffer for IO
01:19:23.175 Hello world!
01:19:23.175 INFO: using host memory buffer for IO
01:19:23.175 Hello world!
01:19:23.175 INFO: using host memory buffer for IO
01:19:23.175 Hello world!
01:19:23.175 INFO: using host memory buffer for IO
01:19:23.175 Hello world!
01:19:23.175 INFO: using host memory buffer for IO
01:19:23.175 Hello world!
01:19:23.435 ************************************
01:19:23.435 END TEST nvme_hello_world
01:19:23.435 ************************************
01:19:23.435 
01:19:23.435 real 0m0.405s
01:19:23.435 user 0m0.192s
01:19:23.435 sys 0m0.167s
01:19:23.435 05:14:05 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:23.435 05:14:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
01:19:23.435 05:14:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
01:19:23.435 05:14:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:19:23.435 05:14:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:23.435 05:14:05 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:23.435 ************************************
01:19:23.435 START TEST nvme_sgl
01:19:23.435 ************************************
01:19:23.435 05:14:05 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
01:19:23.694 0000:00:13.0: build_io_request_0 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_1 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_2 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_3 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_4 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_5 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_6 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_7 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_8 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_9 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_10 Invalid IO length parameter
01:19:23.694 0000:00:13.0: build_io_request_11 Invalid IO length parameter
01:19:23.694 0000:00:10.0: build_io_request_0 Invalid IO length parameter
01:19:23.694 0000:00:10.0: build_io_request_1 Invalid IO length parameter
01:19:23.694 0000:00:10.0: build_io_request_3 Invalid IO length parameter
01:19:23.694 0000:00:10.0: build_io_request_8 Invalid IO length parameter
01:19:23.694 0000:00:10.0: build_io_request_9 Invalid IO length parameter
01:19:23.694 0000:00:10.0: build_io_request_11 Invalid IO length parameter
01:19:23.694 0000:00:11.0: build_io_request_0 Invalid IO length parameter
01:19:23.694 0000:00:11.0: build_io_request_1 Invalid IO length parameter
01:19:23.694 0000:00:11.0: build_io_request_3 Invalid IO length parameter
01:19:23.694 0000:00:11.0: build_io_request_8 Invalid IO length parameter
01:19:23.694 0000:00:11.0: build_io_request_9 Invalid IO length parameter
01:19:23.694 0000:00:11.0: build_io_request_11 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_0 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_1 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_2 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_3 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_4 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_5 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_6 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_7 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_8 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_9 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_10 Invalid IO length parameter
01:19:23.694 0000:00:12.0: build_io_request_11 Invalid IO length parameter
01:19:23.694 NVMe Readv/Writev Request test
01:19:23.694 Attached to 0000:00:13.0
01:19:23.694 Attached to 0000:00:10.0
01:19:23.694 Attached to 0000:00:11.0
01:19:23.694 Attached to 0000:00:12.0
01:19:23.694 0000:00:10.0: build_io_request_2 test passed
01:19:23.694 0000:00:10.0: build_io_request_4 test passed
01:19:23.694 0000:00:10.0: build_io_request_5 test passed
01:19:23.694 0000:00:10.0: build_io_request_6 test passed
01:19:23.694 0000:00:10.0: build_io_request_7 test passed
01:19:23.694 0000:00:10.0: build_io_request_10 test passed
01:19:23.694 0000:00:11.0: build_io_request_2 test passed
01:19:23.694 0000:00:11.0: build_io_request_4 test passed
01:19:23.694 0000:00:11.0: build_io_request_5 test passed
01:19:23.694 0000:00:11.0: build_io_request_6 test passed
01:19:23.694 0000:00:11.0: build_io_request_7 test passed
01:19:23.694 0000:00:11.0: build_io_request_10 test passed
01:19:23.694 Cleaning up...
01:19:23.694 
01:19:23.694 real 0m0.375s
01:19:23.694 user 0m0.181s
01:19:23.694 sys 0m0.152s
01:19:23.694 05:14:06 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:23.694 ************************************
01:19:23.694 END TEST nvme_sgl
01:19:23.694 ************************************
01:19:23.694 05:14:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
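Every test in this run is driven by the same run_test wrapper from autotest_common.sh, which prints the START/END banners and the real/user/sys blocks seen above. A minimal sketch of that pattern, not the actual implementation (the real wrapper also manages xtrace, which is apparently what the '[' 2 -le 1 ']' argument checks in the trace belong to):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # the command's timing is what shows up as real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test_sketch nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl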
01:19:23.952 05:14:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
01:19:23.952 05:14:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:19:23.953 05:14:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:23.953 05:14:06 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:23.953 ************************************
01:19:23.953 START TEST nvme_e2edp
01:19:23.953 ************************************
01:19:23.953 05:14:06 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
01:19:24.244 NVMe Write/Read with End-to-End data protection test
01:19:24.244 Attached to 0000:00:13.0
01:19:24.244 Attached to 0000:00:10.0
01:19:24.244 Attached to 0000:00:11.0
01:19:24.244 Attached to 0000:00:12.0
01:19:24.244 Cleaning up...
01:19:24.244 
01:19:24.244 real 0m0.306s
01:19:24.244 user 0m0.116s
01:19:24.244 sys 0m0.146s
01:19:24.244 ************************************
01:19:24.244 END TEST nvme_e2edp
01:19:24.244 ************************************
01:19:24.244 05:14:06 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:24.244 05:14:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
01:19:24.244 05:14:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
01:19:24.244 05:14:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:19:24.244 05:14:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:24.244 05:14:06 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:24.244 ************************************
01:19:24.244 START TEST nvme_reserve
01:19:24.244 ************************************
01:19:24.244 05:14:06 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
01:19:24.513 =====================================================
01:19:24.513 NVMe Controller at PCI bus 0, device 19, function 0
01:19:24.513 =====================================================
01:19:24.513 Reservations: Not Supported
01:19:24.513 =====================================================
01:19:24.513 NVMe Controller at PCI bus 0, device 16, function 0
01:19:24.513 =====================================================
01:19:24.513 Reservations: Not Supported
01:19:24.513 =====================================================
01:19:24.513 NVMe Controller at PCI bus 0, device 17, function 0
01:19:24.513 =====================================================
01:19:24.513 Reservations: Not Supported
01:19:24.513 =====================================================
01:19:24.513 NVMe Controller at PCI bus 0, device 18, function 0
01:19:24.513 =====================================================
01:19:24.513 Reservations: Not Supported
01:19:24.513 Reservation test passed
01:19:24.513 
01:19:24.513 real 0m0.322s
01:19:24.513 user 0m0.107s
01:19:24.513 sys 0m0.161s
01:19:24.513 ************************************
01:19:24.513 END TEST nvme_reserve
01:19:24.513 ************************************
01:19:24.513 05:14:06 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:24.513 05:14:06 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
01:19:24.513 05:14:06 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
01:19:24.513 05:14:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:19:24.513 05:14:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:24.513 05:14:06 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:24.772 ************************************
01:19:24.772 START TEST nvme_err_injection
01:19:24.772 ************************************
01:19:24.772 05:14:06 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
01:19:25.031 NVMe Error Injection test
01:19:25.031 Attached to 0000:00:13.0
01:19:25.031 Attached to 0000:00:10.0
01:19:25.031 Attached to 0000:00:11.0
01:19:25.031 Attached to 0000:00:12.0
01:19:25.031 0000:00:12.0: get features failed as expected
01:19:25.031 0000:00:13.0: get features failed as expected
01:19:25.031 0000:00:10.0: get features failed as expected
01:19:25.031 0000:00:11.0: get features failed as expected
01:19:25.031 0000:00:13.0: get features successfully as expected
01:19:25.031 0000:00:10.0: get features successfully as expected
01:19:25.031 0000:00:11.0: get features successfully as expected
01:19:25.031 0000:00:12.0: get features successfully as expected
01:19:25.031 0000:00:13.0: read failed as expected
01:19:25.031 0000:00:10.0: read failed as expected
01:19:25.031 0000:00:11.0: read failed as expected
01:19:25.031 0000:00:12.0: read failed as expected
01:19:25.031 0000:00:13.0: read successfully as expected
01:19:25.031 0000:00:10.0: read successfully as expected
01:19:25.031 0000:00:11.0: read successfully as expected
01:19:25.031 0000:00:12.0: read successfully as expected
01:19:25.031 Cleaning up...
01:19:25.031 
01:19:25.031 real 0m0.338s
01:19:25.031 user 0m0.135s
01:19:25.031 sys 0m0.155s
01:19:25.031 ************************************
01:19:25.031 END TEST nvme_err_injection
01:19:25.031 ************************************
01:19:25.031 05:14:07 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:25.031 05:14:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
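A quick way to confirm the error-injection run behaved symmetrically across all four controllers is to count the expected-failure and expected-success lines; with one get-features and one read per controller, each count should be 8. The log file name is again hypothetical:

    grep -c 'failed as expected' console.log         # expect 8
    grep -c 'successfully as expected' console.log   # expect 8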
01:19:25.031 05:14:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
01:19:25.031 05:14:07 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
01:19:25.031 05:14:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:25.031 05:14:07 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:25.031 ************************************
01:19:25.031 START TEST nvme_overhead
01:19:25.031 ************************************
01:19:25.031 05:14:07 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
01:19:26.411 Initializing NVMe Controllers
01:19:26.411 Attached to 0000:00:13.0
01:19:26.411 Attached to 0000:00:10.0
01:19:26.411 Attached to 0000:00:11.0
01:19:26.411 Attached to 0000:00:12.0
01:19:26.411 Initialization complete. Launching workers.
01:19:26.411 submit (in ns) avg, min, max = 13863.1, 12601.6, 63163.1
01:19:26.411 complete (in ns) avg, min, max = 9155.6, 8334.9, 54233.7
01:19:26.411 
01:19:26.411 Submit histogram
01:19:26.411 ================
01:19:26.411        Range in us     Cumulative     Count
01:19:26.411 [bucket lines omitted: cumulative count climbs from 0.0478% ( 3) at the 12.594 to 12.646 us bucket to 100.0000% ( 1) at the 62.920 to 63.332 us bucket]
01:19:26.412 
01:19:26.412 Complete histogram
01:19:26.412 ==================
01:19:26.412        Range in us     Cumulative     Count
01:19:26.412 [bucket lines omitted: cumulative count climbs from 0.1434% ( 9) at the 8.328 to 8.379 us bucket to 100.0000% ( 1) at the 53.873 to 54.284 us bucket]
01:19:26.413 
01:19:26.413 ************************************
01:19:26.413 END TEST nvme_overhead
01:19:26.413 ************************************
01:19:26.413 
01:19:26.413 real 0m1.348s
01:19:26.413 user 0m1.114s
01:19:26.413 sys 0m0.174s
01:19:26.413 05:14:08 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:26.413 05:14:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
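A rough per-I/O software cost implied by the overhead summary above: the submit and complete averages are reported in nanoseconds, so one full round trip through the driver costs roughly their sum:

    awk 'BEGIN { printf "%.1f us per IO\n", (13863.1 + 9155.6) / 1000 }'   # ~23.0 us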
01:19:26.413 05:14:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
01:19:26.413 05:14:08 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
01:19:26.413 05:14:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:26.413 05:14:08 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:26.413 ************************************
01:19:26.413 START TEST nvme_arbitration
01:19:26.413 ************************************
01:19:26.413 05:14:08 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
01:19:30.601 Initializing NVMe Controllers
01:19:30.601 Attached to 0000:00:13.0
01:19:30.601 Attached to 0000:00:10.0
01:19:30.601 Attached to 0000:00:11.0
01:19:30.601 Attached to 0000:00:12.0
01:19:30.601 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
01:19:30.601 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
01:19:30.601 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
01:19:30.601 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
01:19:30.601 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
01:19:30.601 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
01:19:30.601 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
01:19:30.601 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
01:19:30.601 Initialization complete. Launching workers.
01:19:30.601 Starting thread on core 1 with urgent priority queue
01:19:30.601 Starting thread on core 2 with urgent priority queue
01:19:30.601 Starting thread on core 0 with urgent priority queue
01:19:30.601 Starting thread on core 3 with urgent priority queue
01:19:30.601 QEMU NVMe Ctrl (12343 ) core 0: 448.00 IO/s 223.21 secs/100000 ios
01:19:30.601 QEMU NVMe Ctrl (12342 ) core 0: 448.00 IO/s 223.21 secs/100000 ios
01:19:30.601 QEMU NVMe Ctrl (12340 ) core 1: 405.33 IO/s 246.71 secs/100000 ios
01:19:30.601 QEMU NVMe Ctrl (12342 ) core 1: 405.33 IO/s 246.71 secs/100000 ios
01:19:30.601 QEMU NVMe Ctrl (12341 ) core 2: 938.67 IO/s 106.53 secs/100000 ios
01:19:30.601 QEMU NVMe Ctrl (12342 ) core 3: 384.00 IO/s 260.42 secs/100000 ios
01:19:30.601 ========================================================
01:19:30.601 
01:19:30.601 
01:19:30.601 real 0m3.504s
01:19:30.601 user 0m9.457s
01:19:30.601 sys 0m0.164s
01:19:30.601 ************************************
01:19:30.601 END TEST nvme_arbitration
01:19:30.601 ************************************
01:19:30.601 05:14:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:30.601 05:14:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
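Sanity-checking the arbitration summary above: the secs/100000 ios column is just the requested I/O count (the -n 100000 in the configuration line) divided by the measured IO/s, e.g. for the 448.00 IO/s threads:

    awk 'BEGIN { printf "%.2f\n", 100000 / 448.00 }'   # prints 223.21, matching the log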
01:19:30.601 05:14:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
01:19:30.601 05:14:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
01:19:30.601 05:14:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:30.601 05:14:12 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:30.601 ************************************
01:19:30.601 START TEST nvme_single_aen
01:19:30.601 ************************************
01:19:30.601 05:14:12 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
01:19:30.601 Asynchronous Event Request test
01:19:30.601 Attached to 0000:00:13.0
01:19:30.601 Attached to 0000:00:10.0
01:19:30.601 Attached to 0000:00:11.0
01:19:30.601 Attached to 0000:00:12.0
01:19:30.601 Reset controller to setup AER completions for this process
01:19:30.601 Registering asynchronous event callbacks...
01:19:30.601 Getting orig temperature thresholds of all controllers
01:19:30.601 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
01:19:30.601 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
01:19:30.601 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
01:19:30.601 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
01:19:30.601 Setting all controllers temperature threshold low to trigger AER
01:19:30.601 Waiting for all controllers temperature threshold to be set lower
01:19:30.601 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
01:19:30.601 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
01:19:30.601 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
01:19:30.601 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
01:19:30.601 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
01:19:30.602 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
01:19:30.602 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
01:19:30.602 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
01:19:30.602 Waiting for all controllers to trigger AER and reset threshold
01:19:30.602 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
01:19:30.602 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
01:19:30.602 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
01:19:30.602 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
01:19:30.602 Cleaning up...
01:19:30.602 ************************************
01:19:30.602 END TEST nvme_single_aen
01:19:30.602 ************************************
01:19:30.602 
01:19:30.602 real 0m0.323s
01:19:30.602 user 0m0.120s
01:19:30.602 sys 0m0.154s
01:19:30.602 05:14:12 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
01:19:30.602 05:14:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
01:19:30.602 05:14:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
01:19:30.602 05:14:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:19:30.602 05:14:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:19:30.602 05:14:12 nvme -- common/autotest_common.sh@10 -- # set +x
01:19:30.602 ************************************
01:19:30.602 START TEST nvme_doorbell_aers
01:19:30.602 ************************************
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
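Collapsed, the xtrace above amounts to the loop below: get_nvme_bdfs asks scripts/gen_nvme.sh for the local controllers, jq extracts their PCI addresses, and nvme.sh then runs the doorbell_aers binary once per address under a 10-second timeout (the per-device runs follow). Reconstructed from the trace, with rootdir standing for /home/vagrant/spdk_repo/spdk:

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done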
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
01:19:30.602 05:14:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
01:19:30.860 [2024-12-09 05:14:13.238601] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request.
01:19:40.858 Executing: test_write_invalid_db
01:19:40.858 Waiting for AER completion...
01:19:40.858 Failure: test_write_invalid_db
01:19:40.858 
01:19:40.858 Executing: test_invalid_db_write_overflow_sq
01:19:40.858 Waiting for AER completion...
01:19:40.858 Failure: test_invalid_db_write_overflow_sq
01:19:40.858 
01:19:40.858 Executing: test_invalid_db_write_overflow_cq
01:19:40.858 Waiting for AER completion...
01:19:40.858 Failure: test_invalid_db_write_overflow_cq
01:19:40.858 
01:19:40.858 05:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
01:19:40.858 05:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
01:19:41.117 [2024-12-09 05:14:23.368454] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request.
01:19:51.174 Executing: test_write_invalid_db
01:19:51.174 Waiting for AER completion...
01:19:51.174 Failure: test_write_invalid_db
01:19:51.174 
01:19:51.174 Executing: test_invalid_db_write_overflow_sq
01:19:51.174 Waiting for AER completion...
01:19:51.174 Failure: test_invalid_db_write_overflow_sq
01:19:51.174 
01:19:51.174 Executing: test_invalid_db_write_overflow_cq
01:19:51.174 Waiting for AER completion...
01:19:51.174 Failure: test_invalid_db_write_overflow_cq
01:19:51.174 
01:19:51.174 05:14:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
01:19:51.174 05:14:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
01:19:51.174 [2024-12-09 05:14:33.492821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request.
01:20:01.154 Executing: test_write_invalid_db
01:20:01.154 Waiting for AER completion...
01:20:01.154 Failure: test_write_invalid_db
01:20:01.154 
01:20:01.154 Executing: test_invalid_db_write_overflow_sq
01:20:01.154 Waiting for AER completion...
01:20:01.154 Failure: test_invalid_db_write_overflow_sq
01:20:01.154 
01:20:01.154 Executing: test_invalid_db_write_overflow_cq
01:20:01.154 Waiting for AER completion...
01:20:01.154 Failure: test_invalid_db_write_overflow_cq
01:20:01.154 
01:20:01.154 05:14:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
01:20:01.154 05:14:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
01:20:01.413 [2024-12-09 05:14:43.629306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request.
01:20:11.450 Executing: test_write_invalid_db
01:20:11.450 Waiting for AER completion...
01:20:11.450 Failure: test_write_invalid_db
01:20:11.450 
01:20:11.450 Executing: test_invalid_db_write_overflow_sq
01:20:11.450 Waiting for AER completion...
01:20:11.450 Failure: test_invalid_db_write_overflow_sq
01:20:11.450 
01:20:11.450 Executing: test_invalid_db_write_overflow_cq
01:20:11.450 Waiting for AER completion...
01:20:11.450 Failure: test_invalid_db_write_overflow_cq
01:20:11.450 
01:20:11.450 
01:20:11.450 real 0m40.675s
01:20:11.450 user 0m28.728s
01:20:11.450 sys 0m11.569s
01:20:11.450 ************************************
01:20:11.450 END TEST nvme_doorbell_aers
01:20:11.450 ************************************
01:20:11.450 05:14:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
01:20:11.450 05:14:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
01:20:11.450 05:14:53 nvme -- nvme/nvme.sh@97 -- # uname
01:20:11.450 05:14:53 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
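The wall time of the doorbell test above is dominated by its four 10-second timeouts, one per controller:

    awk 'BEGIN { print 4 * 10 " s" }'   # vs. the reported real 0m40.675s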
Dropping the request. 01:20:11.451 [2024-12-09 05:14:53.821003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request. 01:20:11.451 [2024-12-09 05:14:53.821025] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request. 01:20:11.451 [2024-12-09 05:14:53.822410] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request. 01:20:11.451 [2024-12-09 05:14:53.822446] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request. 01:20:11.451 [2024-12-09 05:14:53.822470] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64506) is not found. Dropping the request. 01:20:11.451 Child process pid: 65028 01:20:11.710 [Child] Asynchronous Event Request test 01:20:11.710 [Child] Attached to 0000:00:13.0 01:20:11.710 [Child] Attached to 0000:00:10.0 01:20:11.710 [Child] Attached to 0000:00:11.0 01:20:11.710 [Child] Attached to 0000:00:12.0 01:20:11.710 [Child] Registering asynchronous event callbacks... 01:20:11.710 [Child] Getting orig temperature thresholds of all controllers 01:20:11.710 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.710 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.710 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.710 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.711 [Child] Waiting for all controllers to trigger AER and reset threshold 01:20:11.711 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 [Child] Cleaning up... 01:20:11.711 Asynchronous Event Request test 01:20:11.711 Attached to 0000:00:13.0 01:20:11.711 Attached to 0000:00:10.0 01:20:11.711 Attached to 0000:00:11.0 01:20:11.711 Attached to 0000:00:12.0 01:20:11.711 Reset controller to setup AER completions for this process 01:20:11.711 Registering asynchronous event callbacks... 
01:20:11.711 Getting orig temperature thresholds of all controllers 01:20:11.711 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.711 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.711 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.711 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:20:11.711 Setting all controllers temperature threshold low to trigger AER 01:20:11.711 Waiting for all controllers temperature threshold to be set lower 01:20:11.711 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 01:20:11.711 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 01:20:11.711 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 01:20:11.711 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:20:11.711 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 01:20:11.711 Waiting for all controllers to trigger AER and reset threshold 01:20:11.711 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 01:20:11.711 Cleaning up... 01:20:11.970 ************************************ 01:20:11.970 END TEST nvme_multi_aen 01:20:11.970 ************************************ 01:20:11.970 01:20:11.970 real 0m0.623s 01:20:11.970 user 0m0.210s 01:20:11.970 sys 0m0.305s 01:20:11.970 05:14:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:11.970 05:14:54 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 01:20:11.970 05:14:54 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 01:20:11.970 05:14:54 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:11.970 05:14:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:11.970 05:14:54 nvme -- common/autotest_common.sh@10 -- # set +x 01:20:11.970 ************************************ 01:20:11.970 START TEST nvme_startup 01:20:11.970 ************************************ 01:20:11.970 05:14:54 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 01:20:12.229 Initializing NVMe Controllers 01:20:12.229 Attached to 0000:00:13.0 01:20:12.229 Attached to 0000:00:10.0 01:20:12.229 Attached to 0000:00:11.0 01:20:12.229 Attached to 0000:00:12.0 01:20:12.229 Initialization complete. 01:20:12.229 Time used:196869.328 (us). 
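Every per-controller test in this suite walks the same BDF list that get_nvme_bdfs derives from gen_nvme.sh, as the xtrace lines above show for nvme_doorbell_aers. A minimal sketch of that loop, reconstructed from the trace (the jq filter, the 10-second timeout, and the doorbell_aers invocation are taken straight from the log; the $rootdir/$testdir variables are illustrative stand-ins for the literal paths):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # --preserve-status propagates the test binary's own exit code
        # even when the 10 s watchdog fires first
        timeout --preserve-status 10 "$testdir/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done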
01:20:12.229 01:20:12.229 real 0m0.297s 01:20:12.229 user 0m0.112s 01:20:12.229 sys 0m0.143s 01:20:12.229 05:14:54 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:12.229 ************************************ 01:20:12.229 END TEST nvme_startup 01:20:12.229 ************************************ 01:20:12.229 05:14:54 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 01:20:12.229 05:14:54 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 01:20:12.229 05:14:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:12.229 05:14:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:12.229 05:14:54 nvme -- common/autotest_common.sh@10 -- # set +x 01:20:12.229 ************************************ 01:20:12.229 START TEST nvme_multi_secondary 01:20:12.229 ************************************ 01:20:12.229 05:14:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 01:20:12.229 05:14:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65084 01:20:12.229 05:14:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 01:20:12.229 05:14:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65085 01:20:12.229 05:14:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 01:20:12.229 05:14:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 01:20:15.577 Initializing NVMe Controllers 01:20:15.577 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:20:15.577 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:20:15.577 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:20:15.577 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:20:15.577 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 01:20:15.577 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 01:20:15.577 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 01:20:15.577 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 01:20:15.577 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 01:20:15.577 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 01:20:15.577 Initialization complete. Launching workers. 
01:20:15.577 ======================================================== 01:20:15.577 Latency(us) 01:20:15.577 Device Information : IOPS MiB/s Average min max 01:20:15.577 PCIE (0000:00:13.0) NSID 1 from core 1: 5026.40 19.63 3182.56 1515.66 7974.51 01:20:15.577 PCIE (0000:00:10.0) NSID 1 from core 1: 5026.40 19.63 3180.95 1643.61 7387.01 01:20:15.577 PCIE (0000:00:11.0) NSID 1 from core 1: 5026.40 19.63 3182.96 1592.71 7657.22 01:20:15.577 PCIE (0000:00:12.0) NSID 1 from core 1: 5026.40 19.63 3183.45 1580.62 8295.14 01:20:15.577 PCIE (0000:00:12.0) NSID 2 from core 1: 5026.40 19.63 3183.49 1459.69 8416.81 01:20:15.577 PCIE (0000:00:12.0) NSID 3 from core 1: 5026.40 19.63 3183.56 1477.95 7978.43 01:20:15.577 ======================================================== 01:20:15.577 Total : 30158.42 117.81 3182.83 1459.69 8416.81 01:20:15.577 01:20:15.835 Initializing NVMe Controllers 01:20:15.835 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:20:15.835 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:20:15.835 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:20:15.835 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:20:15.835 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 01:20:15.835 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 01:20:15.835 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 01:20:15.835 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 01:20:15.835 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 01:20:15.835 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 01:20:15.835 Initialization complete. Launching workers. 01:20:15.835 ======================================================== 01:20:15.835 Latency(us) 01:20:15.835 Device Information : IOPS MiB/s Average min max 01:20:15.835 PCIE (0000:00:13.0) NSID 1 from core 2: 3445.82 13.46 4642.16 1152.87 11797.19 01:20:15.835 PCIE (0000:00:10.0) NSID 1 from core 2: 3445.82 13.46 4641.21 1128.91 12483.79 01:20:15.835 PCIE (0000:00:11.0) NSID 1 from core 2: 3445.82 13.46 4642.11 1243.27 13419.06 01:20:15.835 PCIE (0000:00:12.0) NSID 1 from core 2: 3445.82 13.46 4642.77 1128.35 12249.60 01:20:15.835 PCIE (0000:00:12.0) NSID 2 from core 2: 3445.82 13.46 4642.76 1157.76 12397.64 01:20:15.835 PCIE (0000:00:12.0) NSID 3 from core 2: 3445.82 13.46 4642.71 1000.95 11704.62 01:20:15.835 ======================================================== 01:20:15.835 Total : 20674.94 80.76 4642.29 1000.95 13419.06 01:20:15.835 01:20:16.092 05:14:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65084 01:20:17.464 Initializing NVMe Controllers 01:20:17.464 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:20:17.464 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:20:17.464 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:20:17.464 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:20:17.464 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 01:20:17.464 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:20:17.464 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 01:20:17.464 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 01:20:17.464 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 01:20:17.464 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 01:20:17.464 Initialization complete. Launching workers. 
01:20:17.464 ======================================================== 01:20:17.464 Latency(us) 01:20:17.464 Device Information : IOPS MiB/s Average min max 01:20:17.464 PCIE (0000:00:13.0) NSID 1 from core 0: 8395.96 32.80 1905.23 917.49 8821.39 01:20:17.464 PCIE (0000:00:10.0) NSID 1 from core 0: 8395.96 32.80 1904.10 906.80 8626.66 01:20:17.464 PCIE (0000:00:11.0) NSID 1 from core 0: 8395.96 32.80 1905.18 825.38 8567.49 01:20:17.464 PCIE (0000:00:12.0) NSID 1 from core 0: 8395.96 32.80 1905.15 786.48 8614.01 01:20:17.464 PCIE (0000:00:12.0) NSID 2 from core 0: 8395.96 32.80 1905.12 720.76 8947.18 01:20:17.464 PCIE (0000:00:12.0) NSID 3 from core 0: 8399.16 32.81 1904.37 704.91 8778.51 01:20:17.464 ======================================================== 01:20:17.464 Total : 50378.96 196.79 1904.86 704.91 8947.18 01:20:17.464 01:20:17.723 05:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65085 01:20:17.723 05:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65154 01:20:17.723 05:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 01:20:17.723 05:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65155 01:20:17.723 05:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 01:20:17.723 05:15:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 01:20:21.007 Initializing NVMe Controllers 01:20:21.007 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:20:21.007 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:20:21.007 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:20:21.008 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:20:21.008 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 01:20:21.008 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:20:21.008 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 01:20:21.008 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 01:20:21.008 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 01:20:21.008 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 01:20:21.008 Initialization complete. Launching workers. 
01:20:21.008 ======================================================== 01:20:21.008 Latency(us) 01:20:21.008 Device Information : IOPS MiB/s Average min max 01:20:21.008 PCIE (0000:00:13.0) NSID 1 from core 0: 4988.33 19.49 3206.87 1032.43 8870.08 01:20:21.008 PCIE (0000:00:10.0) NSID 1 from core 0: 4988.33 19.49 3205.38 1014.01 9385.20 01:20:21.008 PCIE (0000:00:11.0) NSID 1 from core 0: 4988.33 19.49 3207.19 1056.77 8900.38 01:20:21.008 PCIE (0000:00:12.0) NSID 1 from core 0: 4988.33 19.49 3207.29 1061.67 8729.92 01:20:21.008 PCIE (0000:00:12.0) NSID 2 from core 0: 4988.33 19.49 3207.58 1062.65 8623.33 01:20:21.008 PCIE (0000:00:12.0) NSID 3 from core 0: 4988.33 19.49 3207.65 1061.41 8722.11 01:20:21.008 ======================================================== 01:20:21.008 Total : 29929.96 116.91 3206.99 1014.01 9385.20 01:20:21.008 01:20:21.267 Initializing NVMe Controllers 01:20:21.267 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:20:21.267 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:20:21.267 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:20:21.267 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:20:21.267 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 01:20:21.267 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 01:20:21.267 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 01:20:21.267 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 01:20:21.267 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 01:20:21.267 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 01:20:21.267 Initialization complete. Launching workers. 01:20:21.267 ======================================================== 01:20:21.267 Latency(us) 01:20:21.267 Device Information : IOPS MiB/s Average min max 01:20:21.267 PCIE (0000:00:13.0) NSID 1 from core 1: 5093.86 19.90 3140.32 1048.60 6403.55 01:20:21.267 PCIE (0000:00:10.0) NSID 1 from core 1: 5093.86 19.90 3138.98 1017.13 6160.52 01:20:21.267 PCIE (0000:00:11.0) NSID 1 from core 1: 5093.86 19.90 3140.82 1036.85 6697.19 01:20:21.267 PCIE (0000:00:12.0) NSID 1 from core 1: 5093.86 19.90 3141.26 1041.64 6400.95 01:20:21.267 PCIE (0000:00:12.0) NSID 2 from core 1: 5093.86 19.90 3141.25 1009.17 5977.78 01:20:21.267 PCIE (0000:00:12.0) NSID 3 from core 1: 5093.86 19.90 3141.60 1046.94 5987.20 01:20:21.267 ======================================================== 01:20:21.267 Total : 30563.18 119.39 3140.70 1009.17 6697.19 01:20:21.267 01:20:23.172 Initializing NVMe Controllers 01:20:23.172 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:20:23.172 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:20:23.172 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:20:23.172 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:20:23.172 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 01:20:23.172 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 01:20:23.172 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 01:20:23.172 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 01:20:23.172 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 01:20:23.172 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 01:20:23.172 Initialization complete. Launching workers. 
01:20:23.172 ======================================================== 01:20:23.172 Latency(us) 01:20:23.172 Device Information : IOPS MiB/s Average min max 01:20:23.172 PCIE (0000:00:13.0) NSID 1 from core 2: 3334.86 13.03 4796.83 1076.58 11380.43 01:20:23.172 PCIE (0000:00:10.0) NSID 1 from core 2: 3334.86 13.03 4793.61 1062.82 11737.74 01:20:23.172 PCIE (0000:00:11.0) NSID 1 from core 2: 3334.86 13.03 4795.00 1051.82 12286.79 01:20:23.172 PCIE (0000:00:12.0) NSID 1 from core 2: 3334.86 13.03 4796.99 1047.63 12417.10 01:20:23.172 PCIE (0000:00:12.0) NSID 2 from core 2: 3334.86 13.03 4796.92 1061.76 11453.98 01:20:23.172 PCIE (0000:00:12.0) NSID 3 from core 2: 3334.86 13.03 4796.85 1063.41 11599.90 01:20:23.172 ======================================================== 01:20:23.172 Total : 20009.14 78.16 4796.03 1047.63 12417.10 01:20:23.172 01:20:23.172 ************************************ 01:20:23.172 END TEST nvme_multi_secondary 01:20:23.172 ************************************ 01:20:23.172 05:15:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65154 01:20:23.172 05:15:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65155 01:20:23.172 01:20:23.172 real 0m10.941s 01:20:23.172 user 0m19.110s 01:20:23.172 sys 0m0.999s 01:20:23.172 05:15:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:23.172 05:15:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 01:20:23.172 05:15:05 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 01:20:23.172 05:15:05 nvme -- nvme/nvme.sh@102 -- # kill_stub 01:20:23.172 05:15:05 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64076 ]] 01:20:23.172 05:15:05 nvme -- common/autotest_common.sh@1094 -- # kill 64076 01:20:23.172 05:15:05 nvme -- common/autotest_common.sh@1095 -- # wait 64076 01:20:23.430 [2024-12-09 05:15:05.625991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.430 [2024-12-09 05:15:05.626123] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.430 [2024-12-09 05:15:05.626202] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.430 [2024-12-09 05:15:05.626255] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.430 [2024-12-09 05:15:05.633253] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.430 [2024-12-09 05:15:05.633362] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.430 [2024-12-09 05:15:05.633420] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.633501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.637991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 
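nvme_multi_secondary above exercises SPDK's multi-process mode: several spdk_nvme_perf instances share one DPDK instance id (-i 0) while running on disjoint core masks, so the later processes attach to memory the first one set up. A rough sketch of the pairing, with flags copied from the log (reading the 0x1-mask process as the long-running one, per its -t 5 versus the secondaries' -t 3, is an inference):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # the shared -i 0 puts all instances in one hugepage/shmem group;
    # distinct core masks keep their reactors from colliding
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    wait "$pid0"
    wait "$pid1"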
01:20:23.431 [2024-12-09 05:15:05.638062] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.638091] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.638123] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.642595] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.642672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.642701] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.431 [2024-12-09 05:15:05.642734] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 01:20:23.689 05:15:05 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 01:20:23.689 05:15:05 nvme -- common/autotest_common.sh@1101 -- # echo 2 01:20:23.689 05:15:05 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 01:20:23.689 05:15:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:23.689 05:15:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:23.689 05:15:05 nvme -- common/autotest_common.sh@10 -- # set +x 01:20:23.689 ************************************ 01:20:23.689 START TEST bdev_nvme_reset_stuck_adm_cmd 01:20:23.689 ************************************ 01:20:23.689 05:15:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 01:20:23.689 * Looking for test storage... 
01:20:23.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:20:23.689 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:23.689 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:23.689 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:23.949 --rc genhtml_branch_coverage=1 01:20:23.949 --rc genhtml_function_coverage=1 01:20:23.949 --rc genhtml_legend=1 01:20:23.949 --rc geninfo_all_blocks=1 01:20:23.949 --rc geninfo_unexecuted_blocks=1 01:20:23.949 01:20:23.949 ' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:23.949 --rc genhtml_branch_coverage=1 01:20:23.949 --rc genhtml_function_coverage=1 01:20:23.949 --rc genhtml_legend=1 01:20:23.949 --rc geninfo_all_blocks=1 01:20:23.949 --rc geninfo_unexecuted_blocks=1 01:20:23.949 01:20:23.949 ' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:23.949 --rc genhtml_branch_coverage=1 01:20:23.949 --rc genhtml_function_coverage=1 01:20:23.949 --rc genhtml_legend=1 01:20:23.949 --rc geninfo_all_blocks=1 01:20:23.949 --rc geninfo_unexecuted_blocks=1 01:20:23.949 01:20:23.949 ' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:23.949 --rc genhtml_branch_coverage=1 01:20:23.949 --rc genhtml_function_coverage=1 01:20:23.949 --rc genhtml_legend=1 01:20:23.949 --rc geninfo_all_blocks=1 01:20:23.949 --rc geninfo_unexecuted_blocks=1 01:20:23.949 01:20:23.949 ' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 01:20:23.949 
05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65325 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65325 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65325 ']' 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:23.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
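Before injecting the stuck admin command, the test brings up an spdk_tgt and synchronizes on its RPC socket rather than sleeping. A hedged sketch of that handshake (the binary path, core mask, and waitforlisten helper appear in the log; the retry-until-listening behavior is assumed from the socket message it prints):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    # waitforlisten retries until the target answers on /var/tmp/spdk.sock,
    # so the rpc.py calls that follow cannot race target startup
    waitforlisten "$spdk_target_pid"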
01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:23.949 05:15:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:20:24.208 [2024-12-09 05:15:06.415567] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:20:24.208 [2024-12-09 05:15:06.416135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65325 ] 01:20:24.465 [2024-12-09 05:15:06.679431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:20:24.466 [2024-12-09 05:15:06.813944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:24.466 [2024-12-09 05:15:06.814119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:24.466 [2024-12-09 05:15:06.814267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:24.466 [2024-12-09 05:15:06.814305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:20:25.400 nvme0n1 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_91ICQ.txt 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:20:25.400 true 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733721307 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65354 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 01:20:25.400 05:15:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:20:27.955 [2024-12-09 05:15:09.819024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 01:20:27.955 [2024-12-09 05:15:09.819449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:20:27.955 [2024-12-09 05:15:09.819501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 01:20:27.955 [2024-12-09 05:15:09.819520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:27.955 [2024-12-09 05:15:09.821408] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:27.955 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65354 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65354 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65354 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_91ICQ.txt 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_91ICQ.txt 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65325 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65325 ']' 01:20:27.955 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65325 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65325 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:27.956 killing process with pid 65325 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65325' 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65325 01:20:27.956 05:15:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65325 01:20:30.490 05:15:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 01:20:30.490 05:15:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 01:20:30.490 01:20:30.490 real 0m6.585s 01:20:30.490 user 0m22.472s 01:20:30.490 sys 0m0.823s 01:20:30.490 05:15:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 
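The pair of base64_decode_bits calls above unpacks the saved completion entry: the decoded status word is 2, and shifting and masking it yields the status code and status code type that get compared against the injected values (--sct 0 --sc 1). The arithmetic, spelled out (the bit layout follows the NVMe completion status field; the shift/mask values are the ones visible in the trace):

    status=2   # from: base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'
    # status word: bit 0 = phase tag, bits 1-8 = SC, bits 9 and up = SCT
    nvme_status_sc=$(( (status >> 1) & 0xff ))  # -> 0x1, matches --sc 1
    nvme_status_sct=$(( (status >> 9) & 0x3 ))  # -> 0x0, matches --sct 0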
01:20:30.490 05:15:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:20:30.490 ************************************ 01:20:30.490 END TEST bdev_nvme_reset_stuck_adm_cmd 01:20:30.490 ************************************ 01:20:30.490 05:15:12 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 01:20:30.490 05:15:12 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 01:20:30.490 05:15:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:30.490 05:15:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:30.490 05:15:12 nvme -- common/autotest_common.sh@10 -- # set +x 01:20:30.490 ************************************ 01:20:30.490 START TEST nvme_fio 01:20:30.490 ************************************ 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 01:20:30.490 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:20:30.490 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 01:20:30.490 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:20:30.490 05:15:12 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:20:30.491 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 01:20:30.491 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 01:20:30.491 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:20:30.491 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 01:20:30.491 05:15:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:20:30.749 05:15:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 01:20:30.749 05:15:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:20:31.009 05:15:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:20:31.009 05:15:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:20:31.009 05:15:13 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:20:31.009 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:20:31.268 05:15:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 01:20:31.268 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:20:31.268 fio-3.35 01:20:31.268 Starting 1 thread 01:20:35.477 01:20:35.477 test: (groupid=0, jobs=1): err= 0: pid=65512: Mon Dec 9 05:15:17 2024 01:20:35.477 read: IOPS=22.3k, BW=86.9MiB/s (91.2MB/s)(174MiB/2001msec) 01:20:35.477 slat (usec): min=3, max=279, avg= 4.54, stdev= 1.95 01:20:35.477 clat (usec): min=254, max=10615, avg=2872.95, stdev=546.80 01:20:35.477 lat (usec): min=259, max=10708, avg=2877.49, stdev=547.56 01:20:35.477 clat percentiles (usec): 01:20:35.477 | 1.00th=[ 1958], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 01:20:35.477 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 01:20:35.477 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3163], 01:20:35.477 | 99.00th=[ 5735], 99.50th=[ 7570], 99.90th=[ 8717], 99.95th=[ 8848], 01:20:35.477 | 99.99th=[10290] 01:20:35.477 bw ( KiB/s): min=86792, max=90347, per=98.85%, avg=88006.33, stdev=2027.55, samples=3 01:20:35.477 iops : min=21698, max=22586, avg=22000.67, stdev=507.01, samples=3 01:20:35.477 write: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec); 0 zone resets 01:20:35.477 slat (usec): min=3, max=271, avg= 4.68, stdev= 1.82 01:20:35.477 clat (usec): min=278, max=10423, avg=2872.77, stdev=525.36 01:20:35.477 lat (usec): min=282, max=10437, avg=2877.46, stdev=526.05 01:20:35.477 clat percentiles (usec): 01:20:35.477 | 1.00th=[ 1975], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2704], 01:20:35.477 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 01:20:35.477 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3163], 01:20:35.477 | 99.00th=[ 5473], 99.50th=[ 7242], 99.90th=[ 8455], 99.95th=[ 8717], 01:20:35.477 | 99.99th=[10028] 01:20:35.477 bw ( KiB/s): min=86888, max=89780, per=99.76%, avg=88204.00, stdev=1463.43, samples=3 01:20:35.477 iops : min=21722, max=22445, avg=22051.00, stdev=365.86, samples=3 01:20:35.477 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 01:20:35.477 lat (msec) : 2=1.02%, 4=97.20%, 10=1.73%, 20=0.01% 01:20:35.477 cpu : usr=99.25%, sys=0.15%, ctx=3, majf=0, minf=607 
01:20:35.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:20:35.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:35.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:35.477 issued rwts: total=44535,44229,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:35.477 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:35.477 01:20:35.477 Run status group 0 (all jobs): 01:20:35.477 READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=174MiB (182MB), run=2001-2001msec 01:20:35.477 WRITE: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 01:20:35.477 ----------------------------------------------------- 01:20:35.477 Suppressions used: 01:20:35.477 count bytes template 01:20:35.477 1 32 /usr/src/fio/parse.c 01:20:35.477 1 8 libtcmalloc_minimal.so 01:20:35.477 ----------------------------------------------------- 01:20:35.477 01:20:35.477 05:15:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:20:35.477 05:15:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:20:35.477 05:15:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 01:20:35.477 05:15:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:20:35.477 05:15:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 01:20:35.477 05:15:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:20:36.044 05:15:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:20:36.044 05:15:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:20:36.044 05:15:18 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:20:36.044 05:15:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 01:20:36.044 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:20:36.044 fio-3.35 01:20:36.044 Starting 1 thread 01:20:40.236 01:20:40.236 test: (groupid=0, jobs=1): err= 0: pid=65573: Mon Dec 9 05:15:22 2024 01:20:40.236 read: IOPS=21.9k, BW=85.6MiB/s (89.7MB/s)(171MiB/2001msec) 01:20:40.236 slat (nsec): min=3809, max=57818, avg=4607.92, stdev=1131.07 01:20:40.236 clat (usec): min=243, max=11307, avg=2915.24, stdev=268.59 01:20:40.236 lat (usec): min=247, max=11353, avg=2919.85, stdev=268.89 01:20:40.236 clat percentiles (usec): 01:20:40.236 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 01:20:40.236 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 01:20:40.236 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3097], 01:20:40.236 | 99.00th=[ 3490], 99.50th=[ 4113], 99.90th=[ 6128], 99.95th=[ 9241], 01:20:40.236 | 99.99th=[11076] 01:20:40.236 bw ( KiB/s): min=85464, max=87864, per=99.01%, avg=86765.33, stdev=1212.77, samples=3 01:20:40.236 iops : min=21366, max=21966, avg=21691.33, stdev=303.19, samples=3 01:20:40.236 write: IOPS=21.8k, BW=85.0MiB/s (89.1MB/s)(170MiB/2001msec); 0 zone resets 01:20:40.236 slat (nsec): min=3941, max=47808, avg=4736.33, stdev=1116.71 01:20:40.236 clat (usec): min=190, max=11081, avg=2921.25, stdev=279.36 01:20:40.236 lat (usec): min=194, max=11092, avg=2925.98, stdev=279.66 01:20:40.236 clat percentiles (usec): 01:20:40.236 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 01:20:40.236 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 01:20:40.236 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3097], 01:20:40.236 | 99.00th=[ 3556], 99.50th=[ 4293], 99.90th=[ 7242], 99.95th=[ 9372], 01:20:40.236 | 99.99th=[10814] 01:20:40.236 bw ( KiB/s): min=85192, max=88848, per=99.95%, avg=86986.67, stdev=1828.91, samples=3 01:20:40.236 iops : min=21298, max=22212, avg=21746.67, stdev=457.23, samples=3 01:20:40.236 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:20:40.236 lat (msec) : 2=0.05%, 4=99.34%, 10=0.53%, 20=0.04% 01:20:40.236 cpu : usr=99.30%, sys=0.10%, ctx=4, majf=0, minf=606 01:20:40.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:20:40.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:40.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:40.236 issued rwts: total=43836,43537,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:40.236 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:40.236 01:20:40.236 Run status group 0 (all jobs): 01:20:40.236 READ: bw=85.6MiB/s (89.7MB/s), 85.6MiB/s-85.6MiB/s (89.7MB/s-89.7MB/s), io=171MiB (180MB), run=2001-2001msec 01:20:40.236 WRITE: bw=85.0MiB/s (89.1MB/s), 85.0MiB/s-85.0MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec 01:20:40.236 ----------------------------------------------------- 01:20:40.236 Suppressions used: 01:20:40.236 count bytes template 01:20:40.236 1 32 /usr/src/fio/parse.c 01:20:40.236 1 8 libtcmalloc_minimal.so 01:20:40.236 ----------------------------------------------------- 01:20:40.236 
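Each nvme_fio pass repeats the preload dance visible in the xtrace: when the SPDK fio plugin is built with ASan, fio itself must load libasan before the plugin, so the harness resolves the library from the plugin's ldd output and prepends it to LD_PRELOAD. A condensed sketch (the commands are lifted from the log; $bdf and $fio_config stand in for the literal values):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # only preload a sanitizer runtime if the plugin actually links one
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
    # note the traddr separators become dots (0000.00.11.0) because ':'
    # is a reserved separator in fio filenames
    /usr/src/fio/fio "$fio_config" "--filename=trtype=PCIe traddr=$bdf" --bs=4096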
01:20:40.236 05:15:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:20:40.236 05:15:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:20:40.236 05:15:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 01:20:40.237 05:15:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:20:40.495 05:15:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 01:20:40.495 05:15:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:20:40.755 05:15:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:20:40.755 05:15:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:20:40.755 05:15:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 01:20:41.015 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:20:41.015 fio-3.35 01:20:41.015 Starting 1 thread 01:20:45.206 01:20:45.206 test: (groupid=0, jobs=1): err= 0: pid=65640: Mon Dec 9 05:15:27 2024 01:20:45.206 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 01:20:45.206 slat (nsec): min=3721, max=56731, avg=4472.73, stdev=1321.78 01:20:45.206 clat (usec): min=231, max=12437, avg=2822.30, stdev=431.98 01:20:45.206 lat (usec): min=235, max=12494, avg=2826.77, stdev=432.69 01:20:45.206 clat percentiles (usec): 01:20:45.206 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 01:20:45.206 | 
30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 01:20:45.206 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032], 01:20:45.206 | 99.00th=[ 3556], 99.50th=[ 5932], 99.90th=[ 8717], 99.95th=[ 8979], 01:20:45.206 | 99.99th=[11994] 01:20:45.206 bw ( KiB/s): min=84926, max=90920, per=98.06%, avg=88671.33, stdev=3265.27, samples=3 01:20:45.206 iops : min=21231, max=22730, avg=22167.67, stdev=816.61, samples=3 01:20:45.206 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 01:20:45.206 slat (nsec): min=3833, max=45776, avg=4610.21, stdev=1260.62 01:20:45.206 clat (usec): min=185, max=12167, avg=2829.34, stdev=447.94 01:20:45.206 lat (usec): min=189, max=12180, avg=2833.95, stdev=448.61 01:20:45.206 clat percentiles (usec): 01:20:45.206 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 01:20:45.206 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2835], 01:20:45.206 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032], 01:20:45.206 | 99.00th=[ 3654], 99.50th=[ 6652], 99.90th=[ 8717], 99.95th=[ 9241], 01:20:45.206 | 99.99th=[11600] 01:20:45.206 bw ( KiB/s): min=84782, max=91928, per=98.83%, avg=88884.67, stdev=3688.90, samples=3 01:20:45.206 iops : min=21195, max=22982, avg=22221.00, stdev=922.50, samples=3 01:20:45.206 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:20:45.206 lat (msec) : 2=0.06%, 4=99.12%, 10=0.75%, 20=0.03% 01:20:45.206 cpu : usr=99.25%, sys=0.05%, ctx=18, majf=0, minf=606 01:20:45.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:20:45.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:45.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:45.206 issued rwts: total=45236,44992,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:45.206 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:45.206 01:20:45.206 Run status group 0 (all jobs): 01:20:45.206 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 01:20:45.206 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 01:20:45.206 ----------------------------------------------------- 01:20:45.206 Suppressions used: 01:20:45.206 count bytes template 01:20:45.206 1 32 /usr/src/fio/parse.c 01:20:45.206 1 8 libtcmalloc_minimal.so 01:20:45.206 ----------------------------------------------------- 01:20:45.206 01:20:45.206 05:15:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:20:45.206 05:15:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:20:45.206 05:15:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 01:20:45.206 05:15:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:20:45.772 05:15:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 01:20:45.772 05:15:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:20:46.030 05:15:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:20:46.030 05:15:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:20:46.030 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:20:46.031 05:15:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 01:20:46.031 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:20:46.031 fio-3.35 01:20:46.031 Starting 1 thread 01:20:51.297 01:20:51.297 test: (groupid=0, jobs=1): err= 0: pid=65706: Mon Dec 9 05:15:33 2024 01:20:51.297 read: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(185MiB/2001msec) 01:20:51.297 slat (nsec): min=3773, max=60979, avg=4386.77, stdev=1115.26 01:20:51.297 clat (usec): min=224, max=11403, avg=2703.99, stdev=280.91 01:20:51.297 lat (usec): min=229, max=11459, avg=2708.37, stdev=281.34 01:20:51.297 clat percentiles (usec): 01:20:51.297 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 01:20:51.297 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 01:20:51.297 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2802], 95.00th=[ 2868], 01:20:51.297 | 99.00th=[ 3359], 99.50th=[ 4359], 99.90th=[ 6063], 99.95th=[ 8717], 01:20:51.297 | 99.99th=[11076] 01:20:51.297 bw ( KiB/s): min=91225, max=95616, per=99.38%, avg=93877.67, stdev=2333.93, samples=3 01:20:51.297 iops : min=22806, max=23904, avg=23469.33, stdev=583.62, samples=3 01:20:51.297 write: IOPS=23.5k, BW=91.6MiB/s (96.1MB/s)(183MiB/2001msec); 0 zone resets 01:20:51.297 slat (nsec): min=3877, max=33564, avg=4526.90, stdev=956.42 01:20:51.297 clat (usec): min=198, max=11275, avg=2709.69, stdev=289.93 01:20:51.297 lat (usec): min=202, max=11289, avg=2714.21, stdev=290.34 01:20:51.297 clat percentiles (usec): 01:20:51.297 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 01:20:51.297 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 01:20:51.297 | 70.00th=[ 2737], 80.00th=[ 
2769], 90.00th=[ 2802], 95.00th=[ 2868], 01:20:51.297 | 99.00th=[ 3425], 99.50th=[ 4424], 99.90th=[ 6783], 99.95th=[ 8979], 01:20:51.297 | 99.99th=[10814] 01:20:51.297 bw ( KiB/s): min=90874, max=97168, per=100.00%, avg=93950.00, stdev=3149.40, samples=3 01:20:51.297 iops : min=22718, max=24292, avg=23487.33, stdev=787.59, samples=3 01:20:51.297 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:20:51.297 lat (msec) : 2=0.09%, 4=99.22%, 10=0.62%, 20=0.03% 01:20:51.297 cpu : usr=99.40%, sys=0.15%, ctx=5, majf=0, minf=604 01:20:51.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:20:51.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:51.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:51.297 issued rwts: total=47254,46931,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:51.297 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:51.297 01:20:51.297 Run status group 0 (all jobs): 01:20:51.297 READ: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=185MiB (194MB), run=2001-2001msec 01:20:51.297 WRITE: bw=91.6MiB/s (96.1MB/s), 91.6MiB/s-91.6MiB/s (96.1MB/s-96.1MB/s), io=183MiB (192MB), run=2001-2001msec 01:20:51.557 ----------------------------------------------------- 01:20:51.557 Suppressions used: 01:20:51.557 count bytes template 01:20:51.557 1 32 /usr/src/fio/parse.c 01:20:51.557 1 8 libtcmalloc_minimal.so 01:20:51.557 ----------------------------------------------------- 01:20:51.557 01:20:51.557 05:15:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:20:51.557 05:15:33 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 01:20:51.557 01:20:51.557 real 0m21.267s 01:20:51.557 user 0m15.633s 01:20:51.557 sys 0m6.594s 01:20:51.557 05:15:33 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:51.557 ************************************ 01:20:51.557 END TEST nvme_fio 01:20:51.557 ************************************ 01:20:51.557 05:15:33 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 01:20:51.557 01:20:51.557 real 1m38.126s 01:20:51.557 user 3m47.221s 01:20:51.557 sys 0m26.660s 01:20:51.557 05:15:33 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:51.557 05:15:33 nvme -- common/autotest_common.sh@10 -- # set +x 01:20:51.557 ************************************ 01:20:51.557 END TEST nvme 01:20:51.557 ************************************ 01:20:51.557 05:15:33 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 01:20:51.557 05:15:33 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 01:20:51.557 05:15:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:51.557 05:15:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:51.557 05:15:33 -- common/autotest_common.sh@10 -- # set +x 01:20:51.816 ************************************ 01:20:51.816 START TEST nvme_scc 01:20:51.816 ************************************ 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 01:20:51.816 * Looking for test storage... 
01:20:51.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@345 -- # : 1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@365 -- # decimal 1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@353 -- # local d=1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@355 -- # echo 1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@366 -- # decimal 2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@353 -- # local d=2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@355 -- # echo 2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@368 -- # return 0 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:51.816 --rc genhtml_branch_coverage=1 01:20:51.816 --rc genhtml_function_coverage=1 01:20:51.816 --rc genhtml_legend=1 01:20:51.816 --rc geninfo_all_blocks=1 01:20:51.816 --rc geninfo_unexecuted_blocks=1 01:20:51.816 01:20:51.816 ' 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:51.816 --rc genhtml_branch_coverage=1 01:20:51.816 --rc genhtml_function_coverage=1 01:20:51.816 --rc genhtml_legend=1 01:20:51.816 --rc geninfo_all_blocks=1 01:20:51.816 --rc geninfo_unexecuted_blocks=1 01:20:51.816 01:20:51.816 ' 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:20:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:51.816 --rc genhtml_branch_coverage=1 01:20:51.816 --rc genhtml_function_coverage=1 01:20:51.816 --rc genhtml_legend=1 01:20:51.816 --rc geninfo_all_blocks=1 01:20:51.816 --rc geninfo_unexecuted_blocks=1 01:20:51.816 01:20:51.816 ' 01:20:51.816 05:15:34 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:51.816 --rc genhtml_branch_coverage=1 01:20:51.816 --rc genhtml_function_coverage=1 01:20:51.816 --rc genhtml_legend=1 01:20:51.816 --rc geninfo_all_blocks=1 01:20:51.816 --rc geninfo_unexecuted_blocks=1 01:20:51.816 01:20:51.816 ' 01:20:51.816 05:15:34 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 01:20:51.816 05:15:34 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 01:20:51.816 05:15:34 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 01:20:51.816 05:15:34 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:20:51.816 05:15:34 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:51.816 05:15:34 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:51.816 05:15:34 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:51.816 05:15:34 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:51.816 05:15:34 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:51.816 05:15:34 nvme_scc -- paths/export.sh@5 -- # export PATH 01:20:52.074 05:15:34 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
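The lt 1.15 2 check traced above (scripts/common.sh@333-368) gates LCOV_OPTS on the installed lcov version by splitting both version strings into fields and comparing them numerically, left to right, with missing fields treated as zero. A simplified, self-contained reconstruction; the helper traced here also routes each field through a decimal() validator, which this sketch omits, and numeric fields are assumed:

    # Simplified reconstruction of cmp_versions from scripts/common.sh.
    cmp_versions() {
        local IFS=.-:                       # same separators as the trace (IFS=.-:)
        local -a ver1=($1) ver2=($3)
        local op=$2 v a b
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields compare as 0
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' ]]                   # every field equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov predates 2.x: use the branch/function coverage opts'

Returning the status of the trailing [[ ]] test is what lets the same field-by-field walk serve any comparison operator: the first unequal field decides, and the operator only determines which direction counts as success.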
01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 01:20:52.074 05:15:34 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 01:20:52.074 05:15:34 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:20:52.074 05:15:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 01:20:52.074 05:15:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 01:20:52.074 05:15:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 01:20:52.074 05:15:34 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:20:52.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:20:52.901 Waiting for block devices as requested 01:20:52.901 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:20:52.901 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:20:53.159 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:20:53.159 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:20:58.437 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:20:58.437 05:15:40 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 01:20:58.437 05:15:40 nvme_scc -- scripts/common.sh@18 -- # local i 01:20:58.437 05:15:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:20:58.437 05:15:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 01:20:58.437 05:15:40 nvme_scc -- scripts/common.sh@27 -- # return 0 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@18 -- # shift 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.437 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 01:20:58.438 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 01:20:58.439 05:15:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 01:20:58.439 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:20:58.440 05:15:40 nvme_scc -- 
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
01:20:58.440 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
01:20:58.441 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
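The xtrace above shows nvme/functions.sh walking one namespace: nvme_get runs nvme-cli, splits every "reg : val" output line on ':', and stores the pair in a global associative array named after the device (ng0n1, nvme0n1, and so on). Below is a minimal sketch of that loop, reconstructed from the trace alone; the whitespace trimming and the bare nvme binary name are assumptions, and the shipped functions.sh may differ in detail.

#!/usr/bin/env bash
# Sketch of the nvme_get pattern visible in the trace (a reconstruction,
# not the verbatim nvme/functions.sh).
# $1 = array name, $2 = nvme-cli subcommand, $3 = device node.
nvme_get_sketch() {
	local ref=$1 cmd=$2 dev=$3 reg val

	local -gA "$ref=()" # global assoc array, cf. functions.sh@20

	while IFS=: read -r reg val; do
		reg=${reg//[[:space:]]/}           # strip padding around the key (assumption)
		val=${val#"${val%%[![:space:]]*}"} # ltrim only; trailing blanks kept, cf. sn='12340 '
		[[ -n $val ]] || continue          # skip banner/empty lines, cf. functions.sh@22
		eval "${ref}[\$reg]=\$val"         # expands to e.g. ng0n1[$reg]=$val
	done < <(nvme "$cmd" "$dev") # the trace invokes /usr/local/src/nvme-cli/nvme
}

# Usage matching the trace:
#   nvme_get_sketch ng0n1 id-ns /dev/ng0n1
#   echo "${ng0n1[nsze]}"   # -> 0x140000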
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
01:20:58.442 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
01:20:58.443 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
01:20:58.444 05:15:40 nvme_scc -- scripts/common.sh@18 -- # local i
01:20:58.444 05:15:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
01:20:58.444 05:15:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
01:20:58.444 05:15:40 nvme_scc -- scripts/common.sh@27 -- # return 0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
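At this point the outer loop (functions.sh@47-63 in the trace) has registered the finished controller and probed the next one: nvme0 is filed under BDF 0000:00:11.0, then nvme1 at 0000:00:10.0 passes the pci_can_use gate in scripts/common.sh. A sketch of that bookkeeping follows, under stated assumptions: deriving the BDF by resolving the sysfs device link and filtering against a PCI_BLOCKED list are guesses consistent with the visible values, not confirmed internals of scripts/common.sh.

#!/usr/bin/env bash
# Sketch of the controller discovery/registration seen above.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use_sketch() {
	# Assumed semantics: reject addresses on an (optional) block list.
	local pci=$1
	[[ ${PCI_BLOCKED:-} == *"$pci"* ]] && return 1
	return 0
}

for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(basename "$(readlink -f "$ctrl/device")") # e.g. 0000:00:10.0 (assumed derivation)
	pci_can_use_sketch "$pci" || continue
	ctrl_dev=${ctrl##*/}                 # e.g. nvme1
	ctrls[$ctrl_dev]=$ctrl_dev           # cf. functions.sh@60
	nvmes[$ctrl_dev]=${ctrl_dev}_ns      # cf. functions.sh@61
	bdfs[$ctrl_dev]=$pci                 # cf. functions.sh@62
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # cf. functions.sh@63
done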
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
01:20:58.444 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
01:20:58.445 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
01:20:58.446 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
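The namespace walk traced at functions.sh@54-58 a few frames back uses an extglob alternation so a single loop catches both node flavours under the controller's sysfs directory: the generic character device (ng1n1) and the block device (nvme1n1). A short illustration of the parameter expansions involved (paths hypothetical):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
# ${ctrl##*nvme} -> "1" (controller index); ${ctrl##*/} -> "nvme1",
# so the pattern below becomes @(ng1|nvme1n)* and matches ng1n1 and nvme1n1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "namespace node: ${ns##*/}"   # ${ns##*n} yields the NSID, "1",
done                                 # the key used at functions.sh@58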
01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.447 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 01:20:58.448 05:15:40 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@18 -- # shift 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 
05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:20:58.448 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
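A quick check of the capacity implied by the fields just captured for ng1n1 (and now being re-read through the nvme1n1 block node): nsze/ncap/nuse of 0x17a17a blocks, with flbas=0x7 selecting lbaf7, whose lbads:12 means 4096-byte logical blocks. Worked arithmetic from the logged values:

nsze=$((0x17a17a))                  # 1548666 logical blocks
lbads=12                            # lbaf7 "lbads:12 ... (in use)" -> 4 KiB
echo $(( nsze * (1 << lbads) ))     # 6343335936 bytes, ~5.9 GiB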
01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 01:20:58.449 
05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 01:20:58.449 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 01:20:58.450 05:15:40 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 01:20:58.450 05:15:40 nvme_scc -- scripts/common.sh@18 -- # local i 01:20:58.450 05:15:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 01:20:58.450 05:15:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 01:20:58.450 05:15:40 nvme_scc -- scripts/common.sh@27 -- # return 0 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@18 -- # shift 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.450 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.719 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 01:20:58.720 05:15:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.720 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
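The oacs=0x12a captured a few frames up is the Optional Admin Command Support bitmask; decoding it (bit names per the NVMe base specification) shows which optional admin commands this emulated controller advertises:

oacs=$((0x12a))                     # 0b1_0010_1010
names=([0]="Security Send/Recv" [1]="Format NVM" [2]="FW Commit/Download"
       [3]="Namespace Management" [4]="Device Self-test" [5]="Directives"
       [6]="NVMe-MI" [7]="Virtualization Management"
       [8]="Doorbell Buffer Config")
for bit in "${!names[@]}"; do
  (( oacs & (1 << bit) )) && echo "bit $bit: ${names[$bit]}"
done
# -> bits 1, 3, 5, 8: Format NVM, Namespace Management, Directives,
#    Doorbell Buffer Config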
01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 01:20:58.721 05:15:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 01:20:58.721 
05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 01:20:58.721 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:20:58.722 
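Note: the functions.sh@17-@23 statements traced above come from the nvme_get helper in nvme/functions.sh, which folds "reg : val" lines printed by nvme-cli into a global bash associative array (nvme2 here, ng2n1/ng2n2/ng2n3 below). A minimal sketch reconstructed from the trace, not the verbatim SPDK source; the $nvme_bin variable and the key normalization are assumptions:

    # Sketch: parse "reg : val" output from nvme-cli into an associative array.
    nvme_bin=/usr/local/src/nvme-cli/nvme          # assumed; path seen at functions.sh@16
    nvme_get() {
            local ref=$1 reg val                   # functions.sh@17
            shift                                  # functions.sh@18
            local -gA "$ref=()"                    # functions.sh@20: e.g. declare global nvme2=()
            while IFS=: read -r reg val; do        # functions.sh@21: split each line on ':'
                    [[ -n $val ]] || continue      # functions.sh@22: skip lines with no value
                    reg=${reg//[[:space:]]/}       # assumed normalization (keys like 'ps0' appear trimmed)
                    eval "${ref}[$reg]=\"\${val# }\""  # functions.sh@23: e.g. eval 'nvme2[oncs]="0x15d"'
            done < <("$nvme_bin" "$@")             # e.g. nvme id-ctrl /dev/nvme2 (functions.sh@16)
    }

Invoked as nvme_get nvme2 id-ctrl /dev/nvme2, this leaves every identify field addressable as ${nvme2[oncs]}, ${nvme2[subnqn]}, and so on, which is exactly the state the assignments above record.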
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 ng2n1[ncap]=0x100000 ng2n1[nuse]=0x100000 ng2n1[nsfeat]=0x14 ng2n1[nlbaf]=7 ng2n1[flbas]=0x4 ng2n1[mc]=0x3 ng2n1[dpc]=0x1f ng2n1[dps]=0 ng2n1[nmic]=0
01:20:58.722 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 ng2n1[fpi]=0 ng2n1[dlfeat]=1 ng2n1[nawun]=0 ng2n1[nawupf]=0 ng2n1[nacwu]=0 ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0
01:20:58.723 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0 ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127 ng2n1[nulbaf]=0
01:20:58.723 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0 ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
01:20:58.723 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
01:20:58.723 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
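Note: the functions.sh@54-@58 loop above walks the controller's sysfs directory once per namespace node. With ctrl=/sys/class/nvme/nvme2, the extglob pattern @("ng2"|"nvme2n")* matches both the ng2nY character devices (visited here) and the nvme2nY block devices. A sketch of that loop as reconstructed from the trace; it assumes nvme_get from the sketch above and requires extglob:

    shopt -s extglob                               # the @(...) alternation below needs extglob
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns                   # mirrors functions.sh@53 (local -n inside the real function)
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # "ng2"* or "nvme2n"*
            [[ -e $ns ]] || continue               # functions.sh@55
            ns_dev=${ns##*/}                       # e.g. ng2n1 (functions.sh@56)
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57: fill ng2n1=(...) etc.
            _ctrl_ns[${ns##*n}]=$ns_dev            # functions.sh@58: index by namespace id (1, 2, 3)
    done

The ${ns##*n} expansion strips everything through the last 'n', so /sys/class/nvme/nvme2/ng2n1 is filed under namespace id 1, which is why _ctrl_ns[1]=ng2n1 appears above and _ctrl_ns[2]/_ctrl_ns[3] follow below.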
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4 ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0
01:20:58.724 05:15:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1 ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0 ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127 ng2n2[nulbaf]=0
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0 ng2n2[nguid]=00000000000000000000000000000000 ng2n2[eui64]=0000000000000000
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
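Note: the point of collecting these arrays in an nvme_scc run is to pick controllers that can service the test. Per the NVMe base specification, ONCS bit 8 advertises Copy command (Simple Copy) support, and nvme2[oncs]=0x15d above has that bit set. A hypothetical consumer in the same style; ctrl_supports_scc is an illustrative name, not necessarily the helper functions.sh actually defines:

    # Hypothetical check: does a parsed controller advertise Simple Copy?
    ctrl_supports_scc() {
            local -n _c=$1                 # nameref to an array filled by nvme_get, e.g. nvme2
            (( ${_c[oncs]:-0} & 0x100 ))   # ONCS bit 8 = Copy command support (NVMe base spec)
    }
    ctrl_supports_scc nvme2 && echo "nvme2 advertises Simple Copy (oncs=${nvme2[oncs]})"

With oncs=0x15d the bitwise test yields 0x100, so nvme2 would be selected; a controller reporting, say, oncs=0x5d would be skipped.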
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
01:20:58.725 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 ng2n3[ncap]=0x100000 ng2n3[nuse]=0x100000 ng2n3[nsfeat]=0x14 ng2n3[nlbaf]=7 ng2n3[flbas]=0x4 ng2n3[mc]=0x3 ng2n3[dpc]=0x1f ng2n3[dps]=0 ng2n3[nmic]=0
01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 ng2n3[fpi]=0 ng2n3[dlfeat]=1 ng2n3[nawun]=0 ng2n3[nawupf]=0 ng2n3[nacwu]=0 ng2n3[nabsn]=0 ng2n3[nabo]=0 ng2n3[nabspf]=0 ng2n3[noiob]=0
01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 ng2n3[npwg]=0 ng2n3[npwa]=0 ng2n3[npdg]=0 ng2n3[npda]=0 ng2n3[nows]=0 ng2n3[mssrl]=128 ng2n3[mcl]=128 ng2n3[msrc]=127 ng2n3[nulbaf]=0
01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 ng2n3[nsattr]=0 ng2n3[nvmsetid]=0
IFS=: 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 01:20:58.726 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@18 -- # shift 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.727 05:15:41 nvme_scc -- 
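The run above is the nvme_get helper at work: functions.sh@16 invokes nvme-cli's id-ns against the node, and @21-@23 split each "reg : val" row of its output on the first colon and eval the pair into a global associative array named after the device (rows with an empty value, such as the report banner, are skipped). A minimal sketch of that loop, assuming nvme-cli's usual "field : value" report format; illustrative, not the verbatim functions.sh source:

    # Parse "reg : val" rows from nvme-cli into a global assoc array.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            # skip banner/blank rows; keys like "lbaf  4" collapse to lbaf4
            [[ -n $val ]] && eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # nvme_get ng2n3 id-ns /dev/ng2n3; echo "${ng2n3[nsze]}"   # -> 0x100000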
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
01:20:58.727 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
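Every namespace in this report carries flbas=0x4 with lbaf4 flagged "(in use)": the low four bits of flbas index into lbaf0..lbaf15, and lbaf4's lbads:12 means 2^12 = 4096-byte logical blocks with no interleaved metadata (ms:0). A hypothetical helper (not part of functions.sh) that decodes this from the arrays populated above:

    # Decode the in-use LBA data size from an array filled by nvme_get.
    lba_size() {
        local -n ns=$1                        # bash 4.3+ nameref
        local idx=$(( ${ns[flbas]} & 0xf ))   # 0x4 -> format index 4
        local fmt=${ns[lbaf$idx]}             # 'ms:0 lbads:12 rp:0 (in use)'
        local lbads=${fmt#*lbads:}
        echo $(( 1 << ${lbads%% *} ))         # 2^12 = 4096
    }
    # lba_size nvme2n1   # -> 4096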
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
01:20:58.728 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
01:20:58.729 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
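The loop header at functions.sh@54, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is an extglob: with ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} expands to "2" and ${ctrl##*/} to "nvme2", so the pattern becomes /sys/class/nvme/nvme2/@(ng2|nvme2n)* and matches both the generic character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, nvme2n2, ...), which is why each namespace is walked twice in this trace. A standalone sketch of the same expansion, assuming the standard sysfs layout:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # expands to @(ng2|nvme2n)*: ng2n1.. and nvme2n1.. in one pass
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"
    done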
05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 01:20:58.730 05:15:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 01:20:58.730 05:15:41 nvme_scc -- 
nvme/functions.sh@21-23 -- # [id-ns parse for nvme2n3, one IFS=:/read/eval round per register: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 ']
01:20:58.731 05:15:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
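The rounds above are nvme/functions.sh splitting each "reg : val" line of nvme-cli output and storing it in a global associative array named after the device. A minimal sketch of that parser, reconstructed from the functions.sh@16-@23 steps visible in the trace (the harness pins nvme-cli at /usr/local/src/nvme-cli/nvme; a plain nvme in PATH is assumed here):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # same pattern as the traced local -gA 'nvme3=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue    # banner lines carry no "reg : val" pair
            reg=${reg//[[:space:]]/}     # "mssrl      " -> "mssrl"
            val=${val# }                 # drop the space after the colon, keep trailing padding
            eval "${ref}[\$reg]=\$val"   # e.g. nvme2n3[mssrl]=128
        done < <(nvme "$@")              # e.g. nvme_get nvme2n3 id-ns /dev/nvme2n3
    }

Because the array is created with -g, later helpers can attach a nameref to it by name alone.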
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
01:20:58.732 05:15:41 nvme_scc -- scripts/common.sh@18 -- # local i
01:20:58.732 05:15:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
01:20:58.732 05:15:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
01:20:58.732 05:15:41 nvme_scc -- scripts/common.sh@27 -- # return 0
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
01:20:58.732 05:15:41 nvme_scc -- nvme/functions.sh@17-23 -- # [id-ctrl parse for nvme3: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 oaes=0x100 ctratt=0x88010 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373 endgidmax=1 sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-; the remaining registers (rtd3r rtd3e rrls crdt1 crdt2 crdt3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax anatt anacap anagrpmax nanagrpid pels domainid megcap maxcmd fuses fna awun awupf icsvscc nwpc acwu mnan maxdna maxcna ioccsz iorcsz icdoff fcatt msdbd ofcs) all read back as 0]
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
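Each controller pass ends the same way: four lookup tables tie the device name to its parsed registers, its namespace map, and its PCI address. The data model implied by the @53-@63 assignments (array names taken from the trace):

    declare -A ctrls nvmes bdfs      # declared once at functions.sh@10-@12
    declare -a ordered_ctrls         # functions.sh@13
    ctrl_dev=nvme3
    ctrls["$ctrl_dev"]=nvme3         # name of the assoc array holding nvme3's id-ctrl fields
    nvmes["$ctrl_dev"]=nvme3_ns      # name of the array mapping namespace index -> nvme3nY
    bdfs["$ctrl_dev"]=0000:00:13.0   # PCI BDF backing the controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme3   # numeric slot keeps enumeration order stable

Storing array names rather than values is what makes the later local -n namerefs work.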
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
01:20:58.996 05:15:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@198-199 -- # [ctrl_has_scc for nvme1, nvme0, nvme3 and nvme2 in turn; each resolves get_oncs -> get_nvme_ctrl_feature -> local -n _ctrl=<ctrl> -> oncs=0x15d, (( oncs & 1 << 8 )) is non-zero, and the controller is echoed]
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
01:20:58.996 05:15:41 nvme_scc -- nvme/functions.sh@209 -- # return 0
01:20:58.996 05:15:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
01:20:58.996 05:15:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
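nvme1 is picked because every controller in this VM advertises the Simple Copy command: ONCS is 0x15d (binary 1_0101_1101), and bit 8 of ONCS is the NVMe "Copy supported" flag. A hedged sketch of the test traced at functions.sh@184-@199; get_oncs' body below is an educated guess, while the bit test matches the trace:

    get_oncs() {
        local -n _ctrl=$1                # nameref into the array nvme_get filled
        [[ -n ${_ctrl[oncs]} ]] && echo "${_ctrl[oncs]}"
    }
    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(get_oncs "$ctrl") || return
        (( oncs & 1 << 8 ))              # ONCS bit 8 = Simple Copy command supported
    }

The first qualifying controller, nvme1 at 0000:00:10.0, is handed to the simple_copy app below, which writes LBAs 0-63 with random data, issues one Copy to destination LBA 256, and verifies all 64 LBAs match.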
01:20:58.996 05:15:41 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
01:20:59.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:21:00.503 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
01:21:00.503 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
01:21:00.503 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
01:21:00.503 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
01:21:00.503 05:15:42 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
01:21:00.503 05:15:42 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
01:21:00.503 05:15:42 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
01:21:00.503 05:15:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x
01:21:00.503 ************************************
01:21:00.503 START TEST nvme_simple_copy
01:21:00.503 ************************************
01:21:00.503 05:15:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
01:21:00.763 Initializing NVMe Controllers
01:21:00.763 Attaching to 0000:00:10.0
01:21:00.763 Controller supports SCC. Attached to 0000:00:10.0
01:21:00.763 Namespace ID: 1 size: 6GB
01:21:00.763 Initialization complete.
01:21:00.763
01:21:00.763 Controller QEMU NVMe Ctrl (12340 )
01:21:00.763 Controller PCI vendor:6966 PCI subsystem vendor:6900
01:21:00.763 Namespace Block Size:4096
01:21:00.763 Writing LBAs 0 to 63 with Random Data
01:21:00.763 Copied LBAs from 0 - 63 to the Destination LBA 256
01:21:00.763 LBAs matching Written Data: 64
01:21:00.763
01:21:00.763 real 0m0.308s
01:21:00.763 user 0m0.115s
01:21:00.763 sys 0m0.093s
01:21:00.763 05:15:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
01:21:00.763 05:15:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
01:21:00.763 ************************************
01:21:00.763 END TEST nvme_simple_copy
01:21:00.763 ************************************
01:21:00.763
01:21:00.763 real 0m9.147s
01:21:00.763 user 0m1.708s
01:21:00.763 sys 0m2.470s
01:21:00.763 05:15:43 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
01:21:00.763 05:15:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x
01:21:00.763 ************************************
01:21:00.763 END TEST nvme_scc
01:21:00.763 ************************************
01:21:00.763 05:15:43 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
01:21:00.763 05:15:43 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
01:21:00.763 05:15:43 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
01:21:00.763 05:15:43 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
01:21:00.763 05:15:43 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
01:21:00.763 05:15:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:21:01.023 05:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
01:21:01.023 05:15:43 -- common/autotest_common.sh@10 -- # set +x
01:21:01.023 ************************************
01:21:01.023 START TEST nvme_fdp
01:21:01.023 ************************************
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
01:21:01.023 * Looking for test storage...
01:21:01.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
01:21:01.023 05:15:43 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
01:21:01.023 05:15:43 nvme_fdp -- scripts/common.sh@333-368 -- # [ver1=(1 15), ver2=(2), op='<'; decimal() validates each component, then the @364-@368 loop compares component-wise: ver1[0]=1 < ver2[0]=2, so the comparison succeeds and lt returns 0]
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
01:21:01.023 05:15:43 nvme_fdp -- common/autotest_common.sh@1706-1707 -- # [LCOV_OPTS and LCOV (lcov) exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1]
01:21:01.023 05:15:43 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
01:21:01.023 05:15:43 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
01:21:01.023 05:15:43 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
01:21:01.023 05:15:43 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
01:21:01.023 05:15:43 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
01:21:01.023 05:15:43 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
01:21:01.283 05:15:43 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
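The lcov gate above is a pure-bash version comparison: split both version strings on the separators, then compare numerically component by component. A simplified sketch of the scheme traced at scripts/common.sh@333-@368 (the real cmp_versions also handles '>', '==' and other operators):

    # True (0) when version $1 is strictly older than version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "pre-2.0 lcov: use the old --rc flag spelling"

Here 1 < 2 decides on the first component, so the installed lcov 1.15 gets the pre-2.0 coverage flags seen in the exports above.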
01:21:01.283 05:15:43 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:21:01.283 05:15:43 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
01:21:01.284 05:15:43 nvme_fdp -- paths/export.sh@2-6 -- # [PATH exported with /opt/protoc/21.7/bin, /opt/go/1.21.1/bin and /opt/golangci/1.54.2/bin prepended ahead of the system directories; each re-source prepends the same three segments again, so the echoed PATH carries several duplicate entries]
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
01:21:01.284 05:15:43 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
01:21:01.284 05:15:43 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
01:21:01.284 05:15:43 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
01:21:01.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:21:01.854 Waiting for block devices as requested
01:21:02.114 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
01:21:02.114 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
01:21:02.114 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
01:21:02.388 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
01:21:07.721 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
01:21:07.721 05:15:49 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
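scan_nvme_ctrls now repeats, for the FDP suite, the per-controller walk already traced during nvme_scc. A skeleton of the loop as the @45-@52 lines lay it out (how pci is derived is not shown in the trace; the sysfs readlink below is an assumption):

    scan_nvme_ctrls() {
        local ctrl ctrl_dev reg val ns pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # BDF, e.g. 0000:00:11.0 (assumed)
            pci_can_use "$pci" || continue                   # allow/block check, scripts/common.sh@18-@27
            ctrl_dev=${ctrl##*/}                             # /sys/class/nvme/nvme0 -> nvme0
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fill assoc array nvme0, nvme1, ...
        done
    }

This reuses the nvme_get sketch shown earlier; the namespace sub-loop and bookkeeping assignments follow each id-ctrl pass as before.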
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 01:21:07.721 05:15:49 nvme_fdp -- scripts/common.sh@18 -- # local i 01:21:07.721 05:15:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:21:07.721 05:15:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:07.721 05:15:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- 
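
Everything from here down to the namespace section is one loop unrolled by the trace: nvme_get runs nvme id-ctrl, splits each "field : value" line on the first colon via IFS=: read -r reg val, and evals the pair into a global associative array named after the controller (nvme0[vid]=0x1b36, and so on). A self-contained sketch of that parsing pattern, simplified from the traced function (which takes the array name by reference and shifts it off first):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names are space-padded
        [[ -n $reg && -n $val ]] || continue
        # Only the first ':' splits, so values such as
        # subnqn nqn.2019-08.org.qemu:12341 keep their embedded colon.
        ctrl[$reg]=${val# }          # drop the one leading space
    done < <(nvme id-ctrl /dev/nvme0)

    echo "model=${ctrl[mn]} fw=${ctrl[fr]}"

Trailing padding is kept on purpose, which is why the trace stores values like nvme0[sn]='12341 ' with the spaces intact.
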
nvme/functions.sh@22 -- # [[ -n 6 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.721 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 01:21:07.722 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:21:07.722 05:15:49 nvme_fdp -- 
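
Several of the registers captured above (oacs=0x12a, frmw=0x3, lpa=0x7) are bitmasks rather than counts, so consumers test individual bits. In the NVMe base spec, OACS bit 3 advertises Namespace Management/Attachment support, and 0x12a has that bit set. The check in the same shell idiom:

    oacs=0x12a                     # value captured above
    if (( oacs & (1 << 3) )); then
        echo "controller supports Namespace Management"
    fi
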
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.722 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 01:21:07.723 05:15:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 
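
Two of the fields just captured, sqes=0x66 and cqes=0x44, pack a pair of sizes each: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, both encoded as log2 of bytes. So 0x66 means 64-byte submission queue entries and 0x44 means 16-byte completion queue entries, the standard sizes:

    sqes=0x66 cqes=0x44
    echo "SQE: $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"   # 64..64
    echo "CQE: $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"   # 16..16
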
05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.723 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 01:21:07.724 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 01:21:07.724 05:15:49 
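
The namespace walk starting here leans on extglob (enabled back in scripts/common.sh): for nvme0, the pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to both the generic character nodes (ng0n1) and the block nodes (nvme0n1) under the controller's sysfs directory. The same enumeration in isolation:

    shopt -s extglob
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # "ng<idx>n*" are generic char devices, "nvme<idx>n*" block devices
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] && echo "namespace node: ${ns##*/}"
        done
    done

That is why each namespace below is dumped twice: once as ng0n1 via id-ns on the character device, and once more as nvme0n1 on the block device.
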
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 01:21:07.724 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 01:21:07.724 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 01:21:07.725 05:15:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
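
With flbas=0x4 the low nibble selects LBA format 4, which the listing above shows as ms:0 lbads:12 rp:0 (in use): no interleaved metadata and a data size of 2^12 bytes. Recovering the block size from those two fields:

    flbas=0x4
    lbads=12                      # from the lbaf4 entry marked (in use)
    fmt=$((flbas & 0xf))          # low nibble = selected format index (4)
    echo "LBA format $fmt => $((1 << lbads))-byte blocks"   # 4096-byte blocks
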
01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.725 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 01:21:07.726 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.726 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:07.727 05:15:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 01:21:07.727 05:15:49 nvme_fdp -- scripts/common.sh@18 -- # local i 01:21:07.727 05:15:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:21:07.727 05:15:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:07.727 05:15:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.727 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 01:21:07.728 05:15:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.728 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 01:21:07.729 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 01:21:07.730 05:15:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.730 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 01:21:07.731 05:15:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:07.731 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:07.732 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:21:07.732 05:15:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 01:21:07.732 05:15:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 01:21:07.732 05:15:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.732 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 01:21:07.733 05:15:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
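The eight lbafN entries captured for each namespace are LBA format descriptors: ms is the per-block metadata size in bytes, lbads is log2 of the data block size, and rp is a relative-performance hint, so lbads:9 means 512-byte and lbads:12 means 4096-byte blocks. The flbas=0x7 parsed for nvme1n1 earlier carries its format index in the low nibble, consistent with lbaf7 (ms:64 lbads:12) being the entry tagged "(in use)". A short decode, with the helper name invented for illustration:

  lba_data_size() { echo $(( 1 << $1 )); }  # lbads is log2(bytes)
  lba_data_size 9        # -> 512
  lba_data_size 12       # -> 4096
  echo $(( 0x7 & 0xf ))  # flbas low nibble -> 7, i.e. lbaf7 is in use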
01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 01:21:07.733 05:15:50 nvme_fdp -- scripts/common.sh@18 -- # local i 01:21:07.733 05:15:50 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 01:21:07.733 05:15:50 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:07.733 05:15:50 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.733 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
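A few entries back the loop finished nvme1 and filed it away: _ctrl_ns maps namespace numbers to array names, while ctrls, nvmes, bdfs and ordered_ctrls key everything by the controller device, before the outer loop moves on to nvme2 at PCI 0000:00:12.0. Roughly how those maps could be consumed afterwards — the summary loop below is illustrative only, not part of the script:

  declare -A ctrls=( [nvme1]=nvme1 )        # ctrl dev -> id-ctrl array name
  declare -A nvmes=( [nvme1]=nvme1_ns )     # ctrl dev -> namespace-map name
  declare -A bdfs=(  [nvme1]=0000:00:10.0 ) # ctrl dev -> PCI address (BDF)
  for ctrl in "${!ctrls[@]}"; do
    printf '%s (%s) @ %s\n' "$ctrl" "${nvmes[$ctrl]}" "${bdfs[$ctrl]}"
  done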
01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 01:21:07.734 05:15:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 01:21:07.734 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
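The wctemp=343 and cctemp=373 values just above are the warning and critical composite-temperature thresholds, which NVMe reports in kelvins; converting with the usual 273 offset (273.15 would be exact), and a helper name made up for the example:

  k2c() { echo $(( $1 - 273 )); }
  k2c 343   # warning threshold  -> 70 C
  k2c 373   # critical threshold -> 100 C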
01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 01:21:07.735 05:15:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.735 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
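The sqes=0x66/cqes=0x44 pair recorded a bit earlier packs two log2 sizes into each byte: the low nibble is the required submission/completion queue entry size and the high nibble the maximum. A small decoder, with the function name invented for this sketch:

  decode_es() {
    local v=$(( $1 ))
    echo "required=$(( 1 << (v & 0xf) ))B max=$(( 1 << (v >> 4) ))B"
  }
  decode_es 0x66   # SQ entries: required=64B max=64B
  decode_es 0x44   # CQ entries: required=16B max=16B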
01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.737 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.737 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:07.737 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 01:21:07.737 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 01:21:07.737 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # 
01:21:07.736 05:15:50 nvme_fdp -- nvme/functions.sh@53-57: local -n _ctrl_ns=nvme2_ns; found /sys/class/nvme/nvme2/ng2n1; nvme_get ng2n1 id-ns /dev/ng2n1
01:21:07.737 05:15:50 nvme_fdp -- nvme_get ng2n1 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:21:07.737 05:15:50 nvme_fdp -- nvme_get ng2n1 (id-ns): nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
01:21:07.737 05:15:50 nvme_fdp -- nvme_get ng2n1 (id-ns): anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:21:07.738 05:15:50 nvme_fdp -- nvme_get ng2n1 (id-ns): lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
01:21:07.738 05:15:50 nvme_fdp -- nvme_get ng2n1 (id-ns): lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:21:07.738 05:15:50 nvme_fdp -- nvme/functions.sh@58: _ctrl_ns[1]=ng2n1; found /sys/class/nvme/nvme2/ng2n2; nvme_get ng2n2 id-ns /dev/ng2n2
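flbas=0x4 together with lbaf4 flagged "(in use)" says format 4 is the active one: the low nibble of flbas indexes the LBA format table, and that entry's lbads is log2 of the sector size, so lbads:12 means 4096-byte sectors with no metadata (ms:0). With nsze=0x100000 sectors the namespace comes to 4 GiB. A plain-bash arithmetic sketch of the decode, using the values parsed for ng2n1 above:

  flbas=0x4 nsze=0x100000 lbads=12
  fmt=$(( flbas & 0xf ))               # low nibble selects the format -> 4
  sector=$(( 1 << lbads ))             # lbads = log2(sector size)     -> 4096
  echo "lbaf$fmt in use: $sector-byte sectors, $(( nsze * sector )) bytes"   # 4 GiB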
01:21:07.738 05:15:50 nvme_fdp -- nvme_get ng2n2 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:21:07.739 05:15:50 nvme_fdp -- nvme_get ng2n2 (id-ns): nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
01:21:07.739 05:15:50 nvme_fdp -- nvme_get ng2n2 (id-ns): anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:21:07.739 05:15:50 nvme_fdp -- nvme_get ng2n2 (id-ns): lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0'
01:21:07.739 05:15:50 nvme_fdp -- nvme_get ng2n2 (id-ns): lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:21:07.739 05:15:50 nvme_fdp -- nvme/functions.sh@58: _ctrl_ns[2]=ng2n2; found /sys/class/nvme/nvme2/ng2n3; nvme_get ng2n3 id-ns /dev/ng2n3
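Each namespace visited here comes out of the functions.sh@54 loop, whose extglob pattern expands to both the generic character nodes (ng2nX) and the block nodes (nvme2nX) under the controller's sysfs directory; that is why ng2n1 through ng2n3 are walked before nvme2n1. A standalone sketch of the same expansion (it only prints something on a host that actually has these nodes):

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  # "ng${ctrl##*nvme}" -> "ng2" and "${ctrl##*/}n" -> "nvme2n", as at functions.sh@54
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"   # ng2n1 ng2n2 ng2n3 nvme2n1 ...
  done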
01:21:07.740 05:15:50 nvme_fdp -- nvme_get ng2n3 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:21:07.740 05:15:50 nvme_fdp -- nvme_get ng2n3 (id-ns): nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
01:21:07.741 05:15:50 nvme_fdp -- nvme_get ng2n3 (id-ns): anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0'
01:21:07.741 05:15:50 nvme_fdp -- nvme_get ng2n3 (id-ns): lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:21:07.741 05:15:50 nvme_fdp -- nvme/functions.sh@58: _ctrl_ns[3]=ng2n3; found /sys/class/nvme/nvme2/nvme2n1; ns_dev=nvme2n1
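functions.sh@58 files each node into _ctrl_ns keyed by namespace number, extracted with the longest-prefix strip ${ns##*n}, which drops everything up to and including the last 'n'. Note that ng2n1 and the nvme2n1 node visited next both reduce to index 1, so whichever node the loop records last presumably ends up in that slot. A two-line illustration:

  for ns in ng2n1 ng2n2 ng2n3 nvme2n1; do
    echo "$ns -> _ctrl_ns[${ns##*n}]"  # indices 1, 2, 3, then 1 again
  done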
01:21:07.741 05:15:50 nvme_fdp -- nvme/functions.sh@57: nvme_get nvme2n1 id-ns /dev/nvme2n1
01:21:07.741 05:15:50 nvme_fdp -- nvme_get nvme2n1 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
01:21:07.742 05:15:50 nvme_fdp -- nvme_get nvme2n1 (id-ns): rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
01:21:07.742 05:15:50 nvme_fdp -- nvme_get nvme2n1 (id-ns): npwg=0 npwa=0 npdg=0 npda=0 nows=0
01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@22 --
# [[ -n 128 ]] 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 01:21:07.742 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:08.009 
05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
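The repeated IFS=: / read -r reg val / eval records above are test/nvme/functions.sh (@16-@23 in the trace) folding the output of `nvme id-ns /dev/nvme2n1` into a global bash associative array, one field per iteration. A minimal sketch of that pattern, with the field-name trimming simplified (an assumption of this sketch; the verbatim helper lives in SPDK's test/nvme/functions.sh):

nvme_get() {                       # usage: nvme_get nvme2n1 id-ns /dev/nvme2n1
    local ref=$1 reg val
    shift
    local -gA "$ref=()"            # global assoc array named after the device
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue  # skip banner/blank lines with no "key : value"
        reg=${reg//[[:space:]]/}   # assumption: strip the padding nvme-cli prints
        eval "${ref}[$reg]=\"${val# }\""   # e.g. nvme2n1[dpc]="0x1f"
    done < <(nvme "$@")            # this host invokes /usr/local/src/nvme-cli/nvme
}

On the lbaf0-lbaf7 records just captured: nlbaf=7 means the namespace advertises eight LBA formats, and flbas=0x4 selects format index 4 ("ms:0 lbads:12", i.e. 4096-byte data blocks with no metadata), which is why lbaf4 carries the "(in use)" marker.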
01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.009 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 01:21:08.010 05:15:50 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.010 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 01:21:08.011 05:15:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 01:21:08.011 05:15:50 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 01:21:08.011 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 01:21:08.012 05:15:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:21:08.012 05:15:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.012 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 01:21:08.013 05:15:50 nvme_fdp -- scripts/common.sh@18 -- # local i 01:21:08.013 05:15:50 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 01:21:08.013 05:15:50 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:08.013 05:15:50 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
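Just above, the per-namespace loop closes out nvme2: functions.sh@58 indexes each namespace into _ctrl_ns, then @60-@63 record the controller in ctrls/nvmes/bdfs/ordered_ctrls (bdf 0000:00:12.0), before @47 advances to /sys/class/nvme/nvme3, where scripts/common.sh's pci_can_use accepts 0000:00:13.0 (the allow/block lists are empty in this run) and id-ctrl capture begins for nvme3. A rough condensation of that outer loop, reusing the nvme_get sketch above; the readlink-based BDF lookup is an assumption here, not the script's literal code:

declare -A ctrls bdfs                  # populated per controller, as in the trace
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF from sysfs
    pci_can_use "$pci" || continue     # scripts/common.sh allow/block-list filter
    ctrl_dev=${ctrl##*/}
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # e.g. nvme3 above
    for ns in "$ctrl/${ctrl_dev}n"*; do              # nvme3n1, nvme3n2, ...
        [[ -e $ns ]] && nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                     # bookkeeping as at @60-@63
    bdfs["$ctrl_dev"]=$pci
done

The id-ctrl fields already captured for nvme3 mark it as QEMU's emulated controller: vid 0x1b36 is the Red Hat/QEMU PCI vendor ID, ssvid 0x1af4 the Red Hat virtio subsystem vendor, with mn "QEMU NVMe Ctrl" and fr "8.0.0".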
01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 01:21:08.013 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 
05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 01:21:08.014 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.015 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/functions.sh@209 -- # return 0 01:21:08.016 05:15:50 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 01:21:08.016 05:15:50 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 01:21:08.016 05:15:50 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:21:08.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:09.523 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:21:09.523 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:21:09.523 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:21:09.523 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:21:09.523 05:15:51 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 01:21:09.523 05:15:51 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:21:09.523 05:15:51 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:09.523 05:15:51 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 01:21:09.523 ************************************ 01:21:09.523 START TEST nvme_flexible_data_placement 01:21:09.523 ************************************ 01:21:09.523 05:15:51 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 01:21:10.092 Initializing NVMe Controllers 01:21:10.092 Attaching to 0000:00:13.0 01:21:10.092 Controller supports FDP Attached to 0000:00:13.0 01:21:10.092 Namespace ID: 1 Endurance Group ID: 1 01:21:10.092 Initialization complete. 
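The controller selection traced above follows a single pattern: functions.sh reads "reg : val" pairs from each identify-controller dump into a per-controller associative array (the long run of nvme3[...] assignments), then keeps only controllers whose CTRATT word has bit 19 (Flexible Data Placement) set. That is why nvme3 (ctratt=0x88010, bit 19 set) is echoed while nvme0, nvme1, and nvme2 (ctratt=0x8000) are skipped. A minimal self-contained bash sketch of that pattern follows; the helper names and the hard-coded dump are illustrative, not the verbatim functions.sh source:

    #!/usr/bin/env bash
    # Illustrative sketch of the parse-then-filter pattern in the trace above.

    parse_ctrl() {
        # Fill the named associative array from "reg : val" lines on stdin,
        # mirroring the IFS=: / read -r reg val loop in the trace.
        local -n _ctrl=$1
        local reg val
        while IFS=: read -r reg val; do
            reg=$(tr -d '[:space:]' <<<"$reg")
            val=$(sed 's/^[[:space:]]*//; s/[[:space:]]*$//' <<<"$val")
            [[ -n $reg && -n $val ]] && _ctrl[$reg]=$val
        done
    }

    ctrl_has_fdp() {
        # CTRATT bit 19 advertises Flexible Data Placement support.
        local -n _ctrl=$1
        (( ${_ctrl[ctratt]:-0} & 1 << 19 ))
    }

    declare -A nvme3=()
    parse_ctrl nvme3 < <(printf '%s\n' \
        'ctratt : 0x88010' \
        'subnqn : nqn.2019-08.org.qemu:fdp-subsys3')

    ctrl_has_fdp nvme3 && echo nvme3   # 0x88010 & 0x80000 is non-zero, so nvme3 is selected

Running the same check with ctratt=0x8000 fails the arithmetic test, which mirrors the three controllers filtered out above.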
01:21:10.092 01:21:10.092 ================================== 01:21:10.092 == FDP tests for Namespace: #01 == 01:21:10.092 ================================== 01:21:10.092 01:21:10.092 Get Feature: FDP: 01:21:10.092 ================= 01:21:10.092 Enabled: Yes 01:21:10.092 FDP configuration Index: 0 01:21:10.092 01:21:10.092 FDP configurations log page 01:21:10.092 =========================== 01:21:10.092 Number of FDP configurations: 1 01:21:10.092 Version: 0 01:21:10.092 Size: 112 01:21:10.092 FDP Configuration Descriptor: 0 01:21:10.092 Descriptor Size: 96 01:21:10.092 Reclaim Group Identifier format: 2 01:21:10.092 FDP Volatile Write Cache: Not Present 01:21:10.092 FDP Configuration: Valid 01:21:10.092 Vendor Specific Size: 0 01:21:10.092 Number of Reclaim Groups: 2 01:21:10.092 Number of Reclaim Unit Handles: 8 01:21:10.092 Max Placement Identifiers: 128 01:21:10.092 Number of Namespaces Supported: 256 01:21:10.092 Reclaim Unit Nominal Size: 6000000 bytes 01:21:10.092 Estimated Reclaim Unit Time Limit: Not Reported 01:21:10.092 RUH Desc #000: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #001: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #002: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #003: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #004: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #005: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #006: RUH Type: Initially Isolated 01:21:10.092 RUH Desc #007: RUH Type: Initially Isolated 01:21:10.092 01:21:10.092 FDP reclaim unit handle usage log page 01:21:10.092 ====================================== 01:21:10.092 Number of Reclaim Unit Handles: 8 01:21:10.092 RUH Usage Desc #000: RUH Attributes: Controller Specified 01:21:10.092 RUH Usage Desc #001: RUH Attributes: Unused 01:21:10.092 RUH Usage Desc #002: RUH Attributes: Unused 01:21:10.092 RUH Usage Desc #003: RUH Attributes: Unused 01:21:10.092 RUH Usage Desc #004: RUH Attributes: Unused 01:21:10.093 RUH Usage Desc #005: RUH Attributes: Unused 01:21:10.093 RUH Usage Desc #006: RUH Attributes: Unused 01:21:10.093 RUH Usage Desc #007: RUH Attributes: Unused 01:21:10.093 01:21:10.093 FDP statistics log page 01:21:10.093 ======================= 01:21:10.093 Host bytes with metadata written: 958586880 01:21:10.093 Media bytes with metadata written: 958681088 01:21:10.093 Media bytes erased: 0 01:21:10.093 01:21:10.093 FDP Reclaim unit handle status 01:21:10.093 ============================== 01:21:10.093 Number of RUHS descriptors: 2 01:21:10.093 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002dd2 01:21:10.093 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 01:21:10.093 01:21:10.093 FDP write on placement id: 0 success 01:21:10.093 01:21:10.093 Set Feature: Enabling FDP events on Placement handle: #0 Success 01:21:10.093 01:21:10.093 IO mgmt send: RUH update for Placement ID: #0 Success 01:21:10.093 01:21:10.093 Get Feature: FDP Events for Placement handle: #0 01:21:10.093 ======================== 01:21:10.093 Number of FDP Events: 6 01:21:10.093 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 01:21:10.093 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 01:21:10.093 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 01:21:10.093 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 01:21:10.093 FDP Event: #4 Type: Media Reallocated Enabled: No 01:21:10.093 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 01:21:10.093 01:21:10.093 FDP events log page
01:21:10.093 =================== 01:21:10.093 Number of FDP events: 1 01:21:10.093 FDP Event #0: 01:21:10.093 Event Type: RU Not Written to Capacity 01:21:10.093 Placement Identifier: Valid 01:21:10.093 NSID: Valid 01:21:10.093 Location: Valid 01:21:10.093 Placement Identifier: 0 01:21:10.093 Event Timestamp: 8 01:21:10.093 Namespace Identifier: 1 01:21:10.093 Reclaim Group Identifier: 0 01:21:10.093 Reclaim Unit Handle Identifier: 0 01:21:10.093 01:21:10.093 FDP test passed 01:21:10.093 01:21:10.093 real 0m0.298s 01:21:10.093 user 0m0.097s 01:21:10.093 sys 0m0.099s 01:21:10.093 05:15:52 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:10.093 05:15:52 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 01:21:10.093 ************************************ 01:21:10.093 END TEST nvme_flexible_data_placement 01:21:10.093 ************************************ 01:21:10.093 01:21:10.093 real 0m9.101s 01:21:10.093 user 0m1.659s 01:21:10.093 sys 0m2.551s 01:21:10.093 05:15:52 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:10.093 05:15:52 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 01:21:10.093 ************************************ 01:21:10.093 END TEST nvme_fdp 01:21:10.093 ************************************ 01:21:10.093 05:15:52 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 01:21:10.093 05:15:52 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 01:21:10.093 05:15:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:21:10.093 05:15:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:10.093 05:15:52 -- common/autotest_common.sh@10 -- # set +x 01:21:10.093 ************************************ 01:21:10.093 START TEST nvme_rpc 01:21:10.093 ************************************ 01:21:10.093 05:15:52 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 01:21:10.093 * Looking for test storage... 
01:21:10.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:21:10.093 05:15:52 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:10.093 05:15:52 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:21:10.093 05:15:52 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@345 -- # : 1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@353 -- # local d=1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@355 -- # echo 1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@353 -- # local d=2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@355 -- # echo 2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:10.352 05:15:52 nvme_rpc -- scripts/common.sh@368 -- # return 0 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:10.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:10.352 --rc genhtml_branch_coverage=1 01:21:10.352 --rc genhtml_function_coverage=1 01:21:10.352 --rc genhtml_legend=1 01:21:10.352 --rc geninfo_all_blocks=1 01:21:10.352 --rc geninfo_unexecuted_blocks=1 01:21:10.352 01:21:10.352 ' 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:10.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:10.352 --rc genhtml_branch_coverage=1 01:21:10.352 --rc genhtml_function_coverage=1 01:21:10.352 --rc genhtml_legend=1 01:21:10.352 --rc geninfo_all_blocks=1 01:21:10.352 --rc geninfo_unexecuted_blocks=1 01:21:10.352 01:21:10.352 ' 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:21:10.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:10.352 --rc genhtml_branch_coverage=1 01:21:10.352 --rc genhtml_function_coverage=1 01:21:10.352 --rc genhtml_legend=1 01:21:10.352 --rc geninfo_all_blocks=1 01:21:10.352 --rc geninfo_unexecuted_blocks=1 01:21:10.352 01:21:10.352 ' 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:10.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:10.352 --rc genhtml_branch_coverage=1 01:21:10.352 --rc genhtml_function_coverage=1 01:21:10.352 --rc genhtml_legend=1 01:21:10.352 --rc geninfo_all_blocks=1 01:21:10.352 --rc geninfo_unexecuted_blocks=1 01:21:10.352 01:21:10.352 ' 01:21:10.352 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:21:10.352 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 01:21:10.352 05:15:52 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:21:10.353 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 01:21:10.353 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67113 01:21:10.353 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 01:21:10.353 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 01:21:10.353 05:15:52 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67113 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67113 ']' 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:10.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:10.353 05:15:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:10.612 [2024-12-09 05:15:52.859343] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:21:10.612 [2024-12-09 05:15:52.859502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67113 ] 01:21:10.612 [2024-12-09 05:15:53.044479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:21:10.870 [2024-12-09 05:15:53.162854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:10.870 [2024-12-09 05:15:53.162884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:11.806 05:15:54 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:11.806 05:15:54 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:21:11.806 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:21:12.064 Nvme0n1 01:21:12.064 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 01:21:12.064 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 01:21:12.322 request: 01:21:12.322 { 01:21:12.322 "bdev_name": "Nvme0n1", 01:21:12.322 "filename": "non_existing_file", 01:21:12.322 "method": "bdev_nvme_apply_firmware", 01:21:12.322 "req_id": 1 01:21:12.322 } 01:21:12.322 Got JSON-RPC error response 01:21:12.322 response: 01:21:12.322 { 01:21:12.322 "code": -32603, 01:21:12.322 "message": "open file failed." 01:21:12.322 } 01:21:12.322 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 01:21:12.322 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 01:21:12.322 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 01:21:12.581 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:21:12.581 05:15:54 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67113 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67113 ']' 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67113 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@959 -- # uname 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67113 01:21:12.581 killing process with pid 67113 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67113' 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67113 01:21:12.581 05:15:54 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67113 01:21:15.168 01:21:15.168 real 0m4.828s 01:21:15.168 user 0m8.805s 01:21:15.168 sys 0m0.820s 01:21:15.168 05:15:57 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:15.168 05:15:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:15.168 ************************************ 01:21:15.168 END TEST nvme_rpc 01:21:15.168 ************************************ 01:21:15.168 05:15:57 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 01:21:15.168 05:15:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 01:21:15.168 05:15:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:15.168 05:15:57 -- common/autotest_common.sh@10 -- # set +x 01:21:15.168 ************************************ 01:21:15.168 START TEST nvme_rpc_timeouts 01:21:15.168 ************************************ 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 01:21:15.168 * Looking for test storage... 01:21:15.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:15.168 05:15:57 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:15.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:15.168 --rc genhtml_branch_coverage=1 01:21:15.168 --rc genhtml_function_coverage=1 01:21:15.168 --rc genhtml_legend=1 01:21:15.168 --rc geninfo_all_blocks=1 01:21:15.168 --rc geninfo_unexecuted_blocks=1 01:21:15.168 01:21:15.168 ' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:15.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:15.168 --rc genhtml_branch_coverage=1 01:21:15.168 --rc genhtml_function_coverage=1 01:21:15.168 --rc genhtml_legend=1 01:21:15.168 --rc geninfo_all_blocks=1 01:21:15.168 --rc geninfo_unexecuted_blocks=1 01:21:15.168 01:21:15.168 ' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:15.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:15.168 --rc genhtml_branch_coverage=1 01:21:15.168 --rc genhtml_function_coverage=1 01:21:15.168 --rc genhtml_legend=1 01:21:15.168 --rc geninfo_all_blocks=1 01:21:15.168 --rc geninfo_unexecuted_blocks=1 01:21:15.168 01:21:15.168 ' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:15.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:15.168 --rc genhtml_branch_coverage=1 01:21:15.168 --rc genhtml_function_coverage=1 01:21:15.168 --rc genhtml_legend=1 01:21:15.168 --rc geninfo_all_blocks=1 01:21:15.168 --rc geninfo_unexecuted_blocks=1 01:21:15.168 01:21:15.168 ' 01:21:15.168 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:21:15.168 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67196 01:21:15.168 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67196 01:21:15.168 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67232 01:21:15.169 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 01:21:15.169 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 01:21:15.169 05:15:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67232 01:21:15.169 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67232 ']' 01:21:15.169 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:15.169 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:15.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:15.169 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:15.169 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:15.169 05:15:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 01:21:15.426 [2024-12-09 05:15:57.643184] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:21:15.426 [2024-12-09 05:15:57.643328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67232 ] 01:21:15.426 [2024-12-09 05:15:57.828836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:21:15.684 [2024-12-09 05:15:57.945159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:15.684 [2024-12-09 05:15:57.945195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:16.619 05:15:58 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:16.619 05:15:58 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 01:21:16.619 Checking default timeout settings: 01:21:16.619 05:15:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 01:21:16.619 05:15:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:21:16.877 Making settings changes with rpc: 01:21:16.877 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 01:21:16.877 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 01:21:17.135 Check default vs. modified settings: 01:21:17.135 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 01:21:17.135 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 01:21:17.394 Setting action_on_timeout is changed as expected. 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 01:21:17.394 Setting timeout_us is changed as expected. 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 01:21:17.394 Setting timeout_admin_us is changed as expected. 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67196 /tmp/settings_modified_67196 01:21:17.394 05:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67232 01:21:17.394 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67232 ']' 01:21:17.395 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67232 01:21:17.395 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 01:21:17.395 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:17.395 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67232 01:21:17.653 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:17.653 killing process with pid 67232 01:21:17.653 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:17.653 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67232' 01:21:17.653 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67232 01:21:17.653 05:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67232 01:21:20.181 RPC TIMEOUT SETTING TEST PASSED. 01:21:20.181 05:16:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
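The passing checks above reduce to a small pattern: snapshot the target's JSON config before and after the RPC, then compare one field per setting. A minimal sketch of that pattern, assuming an spdk_tgt already listening on /var/tmp/spdk.sock and the stock rpc.py (the /tmp file names here are illustrative, not the ones the test generates):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    def=/tmp/settings_default_$$; mod=/tmp/settings_modified_$$
    trap 'rm -f "$def" "$mod"' EXIT

    "$rpc" save_config > "$def"
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > "$mod"

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # pull the value column out of the matching JSON line and strip
        # punctuation, exactly as the grep | awk | sed pipeline in the trace
        before=$(grep "$setting" "$def" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$mod" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed"; exit 1
        fi
        echo "Setting $setting is changed as expected."
    done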
01:21:20.181 01:21:20.181 real 0m5.036s 01:21:20.181 user 0m9.402s 01:21:20.181 sys 0m0.811s 01:21:20.181 05:16:02 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:20.181 05:16:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 01:21:20.181 ************************************ 01:21:20.181 END TEST nvme_rpc_timeouts 01:21:20.181 ************************************ 01:21:20.181 05:16:02 -- spdk/autotest.sh@239 -- # uname -s 01:21:20.181 05:16:02 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 01:21:20.181 05:16:02 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 01:21:20.181 05:16:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:21:20.181 05:16:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:20.181 05:16:02 -- common/autotest_common.sh@10 -- # set +x 01:21:20.181 ************************************ 01:21:20.181 START TEST sw_hotplug 01:21:20.181 ************************************ 01:21:20.181 05:16:02 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 01:21:20.181 * Looking for test storage... 01:21:20.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:21:20.181 05:16:02 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:20.181 05:16:02 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 01:21:20.181 05:16:02 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:20.181 05:16:02 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 01:21:20.181 05:16:02 sw_hotplug -- scripts/common.sh@345 -- # : 1 01:21:20.182 05:16:02 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:20.182 05:16:02 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@353 -- # local d=1 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@355 -- # echo 1 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@353 -- # local d=2 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@355 -- # echo 2 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:20.440 05:16:02 sw_hotplug -- scripts/common.sh@368 -- # return 0 01:21:20.440 05:16:02 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:20.440 05:16:02 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:20.440 --rc genhtml_branch_coverage=1 01:21:20.440 --rc genhtml_function_coverage=1 01:21:20.440 --rc genhtml_legend=1 01:21:20.440 --rc geninfo_all_blocks=1 01:21:20.440 --rc geninfo_unexecuted_blocks=1 01:21:20.440 01:21:20.440 ' 01:21:20.440 05:16:02 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:20.440 --rc genhtml_branch_coverage=1 01:21:20.440 --rc genhtml_function_coverage=1 01:21:20.440 --rc genhtml_legend=1 01:21:20.440 --rc geninfo_all_blocks=1 01:21:20.440 --rc geninfo_unexecuted_blocks=1 01:21:20.440 01:21:20.440 ' 01:21:20.440 05:16:02 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:20.440 --rc genhtml_branch_coverage=1 01:21:20.440 --rc genhtml_function_coverage=1 01:21:20.440 --rc genhtml_legend=1 01:21:20.440 --rc geninfo_all_blocks=1 01:21:20.440 --rc geninfo_unexecuted_blocks=1 01:21:20.440 01:21:20.440 ' 01:21:20.440 05:16:02 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:20.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:20.440 --rc genhtml_branch_coverage=1 01:21:20.440 --rc genhtml_function_coverage=1 01:21:20.440 --rc genhtml_legend=1 01:21:20.440 --rc geninfo_all_blocks=1 01:21:20.440 --rc geninfo_unexecuted_blocks=1 01:21:20.440 01:21:20.440 ' 01:21:20.440 05:16:02 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:21:21.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:21.008 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:21:21.008 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:21:21.008 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 01:21:21.008 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 01:21:21.266 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 01:21:21.266 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 01:21:21.266 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
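Before wiring up the coverage flags, the script picks between old and new lcov option syntax with a pure-bash version comparison: split both version strings on '.', '-' and ':' and walk the fields numerically. A condensed sketch of that helper, assuming purely numeric fields (the traced scripts/common.sh additionally normalizes non-numeric components via its decimal helper before comparing):

    lt() {   # lt 1.15 2  ->  exit 0 when the first version sorts lower
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing fields compare as 0, so 1.15 vs 2 walks 1<2 first
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    # e.g. keep the pre-2.0 lcov flag set when the installed lcov is older:
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi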
01:21:21.266 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@233 -- # local class 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@234 -- # local subclass 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@235 -- # local progif 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@236 -- # class=01 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@238 -- # progif=02 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:21:21.266 05:16:03 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@18 -- # local i 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@18 -- # local i 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@18 -- # local i 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:21:21.267 05:16:03 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@18 -- # local i 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 01:21:21.267 05:16:03 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:21:21.267 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 01:21:21.267 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 01:21:21.267 05:16:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:21:21.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:22.094 Waiting for block devices as requested 01:21:22.094 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:21:22.352 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:21:22.352 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:21:22.352 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:21:27.621 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:21:27.621 05:16:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 01:21:27.621 05:16:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:21:28.188 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 01:21:28.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:28.188 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 01:21:28.753 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 01:21:29.011 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:21:29.011 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:21:29.011 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 01:21:29.011 05:16:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68116 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 01:21:29.269 05:16:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 01:21:29.269 05:16:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 01:21:29.269 05:16:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 01:21:29.269 05:16:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 01:21:29.269 05:16:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 01:21:29.269 05:16:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 01:21:29.526 Initializing NVMe Controllers 01:21:29.526 Attaching to 0000:00:10.0 01:21:29.526 Attaching to 0000:00:11.0 01:21:29.526 Attached to 0000:00:11.0 01:21:29.526 Attached to 0000:00:10.0 01:21:29.526 Initialization complete. Starting I/O... 
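The two controllers being attached here are the ones picked out by the nvme_in_userspace scan earlier in the run. Stripped of its xtrace framing, that scan is one pipeline plus a slice; a sketch reusing the exact awk match from the trace (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe; the nvme_bdfs name exists only in this sketch):

    nvme_bdfs() {
        # list every PCI function with numeric IDs and full domains, keep
        # prog-if 02 rows, then print the BDF column for class code 0108
        lspci -mm -n -D | grep -i -- -p02 \
            | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    }
    nvmes=($(nvme_bdfs))
    nvmes=("${nvmes[@]::2}")   # this test then keeps only the first two BDFs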
01:21:29.527 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 01:21:29.527 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 01:21:29.527 01:21:30.459 QEMU NVMe Ctrl (12341 ): 1568 I/Os completed (+1568) 01:21:30.459 QEMU NVMe Ctrl (12340 ): 1568 I/Os completed (+1568) 01:21:30.460 01:21:31.395 QEMU NVMe Ctrl (12341 ): 3712 I/Os completed (+2144) 01:21:31.395 QEMU NVMe Ctrl (12340 ): 3712 I/Os completed (+2144) 01:21:31.395 01:21:32.332 QEMU NVMe Ctrl (12341 ): 5924 I/Os completed (+2212) 01:21:32.332 QEMU NVMe Ctrl (12340 ): 5924 I/Os completed (+2212) 01:21:32.332 01:21:33.707 QEMU NVMe Ctrl (12341 ): 8112 I/Os completed (+2188) 01:21:33.707 QEMU NVMe Ctrl (12340 ): 8112 I/Os completed (+2188) 01:21:33.707 01:21:34.644 QEMU NVMe Ctrl (12341 ): 10300 I/Os completed (+2188) 01:21:34.644 QEMU NVMe Ctrl (12340 ): 10300 I/Os completed (+2188) 01:21:34.644 01:21:35.213 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:21:35.213 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:21:35.213 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:21:35.213 [2024-12-09 05:16:17.527699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:21:35.213 Controller removed: QEMU NVMe Ctrl (12340 ) 01:21:35.213 [2024-12-09 05:16:17.529702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 [2024-12-09 05:16:17.529869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 [2024-12-09 05:16:17.529922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 [2024-12-09 05:16:17.530022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:21:35.213 [2024-12-09 05:16:17.532785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 [2024-12-09 05:16:17.532922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 [2024-12-09 05:16:17.532973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 [2024-12-09 05:16:17.533069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.213 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:21:35.213 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:21:35.213 [2024-12-09 05:16:17.566183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:21:35.213 Controller removed: QEMU NVMe Ctrl (12341 ) 01:21:35.213 [2024-12-09 05:16:17.567835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 [2024-12-09 05:16:17.567911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 [2024-12-09 05:16:17.567962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 [2024-12-09 05:16:17.568004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:21:35.214 [2024-12-09 05:16:17.570589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 [2024-12-09 05:16:17.570702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 [2024-12-09 05:16:17.570756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 [2024-12-09 05:16:17.570853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:35.214 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 01:21:35.214 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 01:21:35.214 EAL: Scan for (pci) bus failed. 01:21:35.214 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:21:35.473 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:21:35.473 Attaching to 0000:00:10.0 01:21:35.473 Attached to 0000:00:10.0 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:21:35.473 05:16:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:21:35.473 Attaching to 0000:00:11.0 01:21:35.473 Attached to 0000:00:11.0 01:21:36.409 QEMU NVMe Ctrl (12340 ): 2104 I/Os completed (+2104) 01:21:36.409 QEMU NVMe Ctrl (12341 ): 1871 I/Os completed (+1871) 01:21:36.409 01:21:37.343 QEMU NVMe Ctrl (12340 ): 4304 I/Os completed (+2200) 01:21:37.344 QEMU NVMe Ctrl (12341 ): 4071 I/Os completed (+2200) 01:21:37.344 01:21:38.720 QEMU NVMe Ctrl (12340 ): 6504 I/Os completed (+2200) 01:21:38.720 QEMU NVMe Ctrl (12341 ): 6271 I/Os completed (+2200) 01:21:38.720 01:21:39.337 QEMU NVMe Ctrl (12340 ): 8708 I/Os completed (+2204) 01:21:39.337 QEMU NVMe Ctrl (12341 ): 8475 I/Os completed (+2204) 01:21:39.337 01:21:40.717 QEMU NVMe Ctrl (12340 ): 10920 I/Os completed (+2212) 01:21:40.717 QEMU NVMe Ctrl (12341 ): 10687 I/Os completed (+2212) 01:21:40.717 01:21:41.655 QEMU NVMe Ctrl (12340 ): 13116 I/Os completed (+2196) 01:21:41.655 QEMU NVMe Ctrl (12341 ): 12883 I/Os completed (+2196) 01:21:41.655 01:21:42.592 QEMU NVMe Ctrl (12340 ): 15312 I/Os completed (+2196) 01:21:42.592 QEMU NVMe Ctrl (12341 ): 15079 I/Os completed (+2196) 
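One remove/attach cycle is now complete: both controllers were surprise-removed, the bus was rescanned, and the devices were handed back to uio_pci_generic. bash xtrace never prints redirections, so the targets of the bare echo calls above are not in this log; the sketch below reconstructs the cycle with the standard Linux sysfs hotplug nodes, which is an assumption rather than a quote from sw_hotplug.sh (the trace also writes the BDF twice per device; only a single probe write is shown here):

    for dev in "${nvmes[@]}"; do
        # assumed target: surprise-remove the function from the bus
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done
    sleep "$hotplug_wait"                 # let the app observe the removal
    echo 1 > /sys/bus/pci/rescan          # assumed target: re-enumerate the bus
    for dev in "${nvmes[@]}"; do
        # assumed rebind dance: pin the driver choice, ask the kernel to
        # probe the device, then clear the override for the next cycle
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done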
01:21:42.592 01:21:43.526 QEMU NVMe Ctrl (12340 ): 17512 I/Os completed (+2200) 01:21:43.526 QEMU NVMe Ctrl (12341 ): 17279 I/Os completed (+2200) 01:21:43.526 01:21:44.463 QEMU NVMe Ctrl (12340 ): 19720 I/Os completed (+2208) 01:21:44.463 QEMU NVMe Ctrl (12341 ): 19487 I/Os completed (+2208) 01:21:44.463 01:21:45.402 QEMU NVMe Ctrl (12340 ): 21908 I/Os completed (+2188) 01:21:45.402 QEMU NVMe Ctrl (12341 ): 21679 I/Os completed (+2192) 01:21:45.402 01:21:46.339 QEMU NVMe Ctrl (12340 ): 24124 I/Os completed (+2216) 01:21:46.339 QEMU NVMe Ctrl (12341 ): 23895 I/Os completed (+2216) 01:21:46.339 01:21:47.714 QEMU NVMe Ctrl (12340 ): 26308 I/Os completed (+2184) 01:21:47.714 QEMU NVMe Ctrl (12341 ): 26079 I/Os completed (+2184) 01:21:47.714 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:21:47.714 [2024-12-09 05:16:29.903614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:21:47.714 Controller removed: QEMU NVMe Ctrl (12340 ) 01:21:47.714 [2024-12-09 05:16:29.905473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.905638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.905692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.905826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:21:47.714 [2024-12-09 05:16:29.908788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.908931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.908983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.909092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:21:47.714 [2024-12-09 05:16:29.942278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:21:47.714 Controller removed: QEMU NVMe Ctrl (12341 ) 01:21:47.714 [2024-12-09 05:16:29.943991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.944079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.944132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.944174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:21:47.714 [2024-12-09 05:16:29.946953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.947114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.947168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 [2024-12-09 05:16:29.947212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 01:21:47.714 05:16:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:21:47.714 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:21:47.714 Attaching to 0000:00:10.0 01:21:47.714 Attached to 0000:00:10.0 01:21:47.972 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:21:47.972 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:21:47.972 05:16:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:21:47.972 Attaching to 0000:00:11.0 01:21:47.972 Attached to 0000:00:11.0 01:21:48.539 QEMU NVMe Ctrl (12340 ): 1240 I/Os completed (+1240) 01:21:48.539 QEMU NVMe Ctrl (12341 ): 1044 I/Os completed (+1044) 01:21:48.539 01:21:49.475 QEMU NVMe Ctrl (12340 ): 3392 I/Os completed (+2152) 01:21:49.475 QEMU NVMe Ctrl (12341 ): 3196 I/Os completed (+2152) 01:21:49.475 01:21:50.412 QEMU NVMe Ctrl (12340 ): 5512 I/Os completed (+2120) 01:21:50.412 QEMU NVMe Ctrl (12341 ): 5316 I/Os completed (+2120) 01:21:50.412 01:21:51.351 QEMU NVMe Ctrl (12340 ): 7724 I/Os completed (+2212) 01:21:51.351 QEMU NVMe Ctrl (12341 ): 7528 I/Os completed (+2212) 01:21:51.351 01:21:52.289 QEMU NVMe Ctrl (12340 ): 9928 I/Os completed (+2204) 01:21:52.289 QEMU NVMe Ctrl (12341 ): 9732 I/Os completed (+2204) 01:21:52.289 01:21:53.702 QEMU NVMe Ctrl (12340 ): 12128 I/Os completed (+2200) 01:21:53.702 QEMU NVMe Ctrl (12341 ): 11932 I/Os completed (+2200) 01:21:53.702 01:21:54.269 QEMU NVMe Ctrl (12340 ): 14328 I/Os completed (+2200) 01:21:54.269 QEMU NVMe Ctrl (12341 ): 14132 I/Os completed (+2200) 01:21:54.269 01:21:55.646 QEMU NVMe Ctrl (12340 ): 16524 I/Os completed (+2196) 01:21:55.646 QEMU NVMe Ctrl (12341 ): 16328 I/Os completed (+2196) 01:21:55.646 01:21:56.582 
QEMU NVMe Ctrl (12340 ): 18720 I/Os completed (+2196) 01:21:56.582 QEMU NVMe Ctrl (12341 ): 18524 I/Os completed (+2196) 01:21:56.582 01:21:57.517 QEMU NVMe Ctrl (12340 ): 20924 I/Os completed (+2204) 01:21:57.517 QEMU NVMe Ctrl (12341 ): 20728 I/Os completed (+2204) 01:21:57.517 01:21:58.451 QEMU NVMe Ctrl (12340 ): 23124 I/Os completed (+2200) 01:21:58.451 QEMU NVMe Ctrl (12341 ): 22928 I/Os completed (+2200) 01:21:58.451 01:21:59.385 QEMU NVMe Ctrl (12340 ): 25328 I/Os completed (+2204) 01:21:59.385 QEMU NVMe Ctrl (12341 ): 25132 I/Os completed (+2204) 01:21:59.385 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:21:59.953 [2024-12-09 05:16:42.239007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:21:59.953 Controller removed: QEMU NVMe Ctrl (12340 ) 01:21:59.953 [2024-12-09 05:16:42.240842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.241014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.241070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.241173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:21:59.953 [2024-12-09 05:16:42.244163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.244315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.244368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.244487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:21:59.953 [2024-12-09 05:16:42.260155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:21:59.953 Controller removed: QEMU NVMe Ctrl (12341 ) 01:21:59.953 [2024-12-09 05:16:42.261887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.262042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.262101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.262207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:21:59.953 [2024-12-09 05:16:42.264880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.264956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.265005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 [2024-12-09 05:16:42.265045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:21:59.953 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:22:00.212 Attaching to 0000:00:10.0 01:22:00.212 Attached to 0000:00:10.0 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:00.212 05:16:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:22:00.212 Attaching to 0000:00:11.0 01:22:00.212 Attached to 0000:00:11.0 01:22:00.212 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:22:00.212 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:22:00.212 [2024-12-09 05:16:42.560662] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 01:22:12.481 05:16:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 01:22:12.481 05:16:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:22:12.481 05:16:54 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.03 01:22:12.481 05:16:54 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.03 01:22:12.481 05:16:54 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 01:22:12.481 05:16:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 01:22:12.481 05:16:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 01:22:12.481 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 05:16:54 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68116 01:22:19.046 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68116) - No such process 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68116 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68666 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68666 01:22:19.046 05:17:00 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68666 ']' 01:22:19.046 05:17:00 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:19.046 05:17:00 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:22:19.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:19.046 05:17:00 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:19.046 05:17:00 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:19.046 05:17:00 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:19.046 05:17:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:19.046 [2024-12-09 05:17:00.670165] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:22:19.046 [2024-12-09 05:17:00.670289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68666 ] 01:22:19.046 [2024-12-09 05:17:00.842826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:19.046 [2024-12-09 05:17:00.945565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@711 -- # exec 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 01:22:19.614 05:17:01 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 01:22:19.614 05:17:01 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 01:22:19.614 05:17:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:26.210 05:17:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.210 05:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:26.210 [2024-12-09 05:17:07.869348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:22:26.210 [2024-12-09 05:17:07.871633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.210 [2024-12-09 05:17:07.871728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.210 [2024-12-09 05:17:07.871752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.210 [2024-12-09 05:17:07.871779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.210 [2024-12-09 05:17:07.871791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.210 [2024-12-09 05:17:07.871806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.210 [2024-12-09 05:17:07.871820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.210 [2024-12-09 05:17:07.871834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.210 [2024-12-09 05:17:07.871845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.210 [2024-12-09 05:17:07.871866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.210 [2024-12-09 05:17:07.871877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.210 [2024-12-09 05:17:07.871891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.210 05:17:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.210 05:17:07 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:22:26.210 05:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:22:26.210 [2024-12-09 05:17:08.268719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 01:22:26.210 [2024-12-09 05:17:08.271029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.210 [2024-12-09 05:17:08.271068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.210 [2024-12-09 05:17:08.271086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.210 [2024-12-09 05:17:08.271105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.210 [2024-12-09 05:17:08.271119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.211 [2024-12-09 05:17:08.271131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.211 [2024-12-09 05:17:08.271147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.211 [2024-12-09 05:17:08.271157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.211 [2024-12-09 05:17:08.271171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.211 [2024-12-09 05:17:08.271183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:26.211 [2024-12-09 05:17:08.271197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:22:26.211 [2024-12-09 05:17:08.271208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:26.211 05:17:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.211 05:17:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:26.211 05:17:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:26.211 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:26.469 
05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:26.469 05:17:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:38.684 05:17:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:38.684 05:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:38.684 05:17:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:22:38.684 [2024-12-09 05:17:20.849383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:22:38.684 [2024-12-09 05:17:20.852179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.684 [2024-12-09 05:17:20.852347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.684 [2024-12-09 05:17:20.852486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.684 [2024-12-09 05:17:20.852623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.684 [2024-12-09 05:17:20.852662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.684 [2024-12-09 05:17:20.852715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.684 [2024-12-09 05:17:20.852873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.684 [2024-12-09 05:17:20.852894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.684 [2024-12-09 05:17:20.852907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.684 [2024-12-09 05:17:20.852924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.684 [2024-12-09 05:17:20.852935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.684 [2024-12-09 05:17:20.852950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:38.684 05:17:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:38.684 05:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:38.684 05:17:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 01:22:38.684 05:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:22:38.943 [2024-12-09 05:17:21.348584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 01:22:38.943 [2024-12-09 05:17:21.351036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.943 [2024-12-09 05:17:21.351075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.943 [2024-12-09 05:17:21.351112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.943 [2024-12-09 05:17:21.351134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.943 [2024-12-09 05:17:21.351148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.943 [2024-12-09 05:17:21.351160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.943 [2024-12-09 05:17:21.351175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.943 [2024-12-09 05:17:21.351186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.943 [2024-12-09 05:17:21.351200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:38.943 [2024-12-09 05:17:21.351213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:38.943 [2024-12-09 05:17:21.351226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:22:38.943 [2024-12-09 05:17:21.351237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:39.201 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 01:22:39.201 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:22:39.201 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:22:39.201 05:17:21 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 01:22:39.201 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:39.201 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:39.202 05:17:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.202 05:17:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:39.202 05:17:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.202 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:22:39.202 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:22:39.202 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:39.202 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:39.202 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:39.459 05:17:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:51.811 05:17:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.811 05:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:51.811 05:17:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:22:51.811 [2024-12-09 05:17:33.929201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
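This second half of the test (tgt_run_hotplug, use_bdev=true) no longer watches sysfs; after each removal it polls the running spdk_tgt over RPC until the dead controllers drop out of the bdev list. The wait loop in the trace condenses to roughly the following, assuming the stock rpc.py and reusing the jq filter quoted verbatim in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_bdfs() {
        # every PCI address still backing an NVMe bdev, deduplicated
        "$rpc" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    while bdfs=($(bdev_bdfs)); (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done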
01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:22:51.811 [2024-12-09 05:17:33.931950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:51.811 [2024-12-09 05:17:33.932093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:22:51.811 [2024-12-09 05:17:33.932339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:51.811 [2024-12-09 05:17:33.932378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:51.811 [2024-12-09 05:17:33.932394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:22:51.811 [2024-12-09 05:17:33.932412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:51.811 [2024-12-09 05:17:33.932425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:51.811 [2024-12-09 05:17:33.932440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:22:51.811 [2024-12-09 05:17:33.932452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:51.811 [2024-12-09 05:17:33.932477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:51.811 [2024-12-09 05:17:33.932489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:22:51.811 [2024-12-09 05:17:33.932504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:51.811 05:17:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.811 05:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:51.811 05:17:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 01:22:51.811 05:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:22:52.070 [2024-12-09 05:17:34.328595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
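The "Still waiting for ... to be gone" lines in this log come from the detach-wait loop at sw_hotplug.sh@50-51: after the surprise removal, the bdev list is polled every half second until no PCI address is reported. A sketch of that loop, using the bdev_bdfs helper above:

# Poll until the removed controllers stop appearing in bdev_get_bdevs output.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # @51 in the trace
    sleep 0.5                                                # @50
done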
01:22:52.070 [2024-12-09 05:17:34.331023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:52.070 [2024-12-09 05:17:34.331169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:22:52.070 [2024-12-09 05:17:34.331199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:52.071 [2024-12-09 05:17:34.331226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:52.071 [2024-12-09 05:17:34.331241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:22:52.071 [2024-12-09 05:17:34.331253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:52.071 [2024-12-09 05:17:34.331270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:52.071 [2024-12-09 05:17:34.331282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:22:52.071 [2024-12-09 05:17:34.331299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:52.071 [2024-12-09 05:17:34.331312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:22:52.071 [2024-12-09 05:17:34.331325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:22:52.071 [2024-12-09 05:17:34.331337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:52.071 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 01:22:52.071 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:22:52.071 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:22:52.071 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:22:52.071 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:22:52.071 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:22:52.071 05:17:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.071 05:17:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:22:52.071 05:17:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:22:52.329 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
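The echo sequence at @59-@62 above re-attaches each controller to uio_pci_generic. Only the echoed values are visible in the xtrace; the redirect targets in the sketch below are assumptions based on the usual PCI driver_override rebind flow, and the trace's second BDF echo (@61) is folded into a single probe write here:

rebind_to() {
    local driver=$1 bdf=$2
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"  # assumed target path
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # assumed target path
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"         # clear override, the @62 echo ''
}
rebind_to uio_pci_generic 0000:00:10.0
rebind_to uio_pci_generic 0000:00:11.0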
01:22:52.588 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:22:52.588 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:22:52.588 05:17:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 01:23:04.793 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 01:23:04.793 05:17:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 01:23:04.793 05:17:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 01:23:04.793 05:17:46 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 01:23:11.381 05:17:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:23:11.381 05:17:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:23:11.381 05:17:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:11.381 05:17:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:11.381 05:17:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:11.381 [2024-12-09 05:17:53.053699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:23:11.381 [2024-12-09 05:17:53.055793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.055839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.055856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 [2024-12-09 05:17:53.055884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.055895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.055910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 [2024-12-09 05:17:53.055923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.055939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.055951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 [2024-12-09 05:17:53.055966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.055977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.055995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 05:17:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:23:11.381 [2024-12-09 05:17:53.453056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
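Between the two timed passes, the script flips SPDK's own hotplug poller over RPC (the sw_hotplug.sh@119-@120 records above) before re-running the helper with use_bdev=true. The calls, taken verbatim from the trace:

rpc_cmd bdev_nvme_set_hotplug -d   # disable the poller while reconfiguring
rpc_cmd bdev_nvme_set_hotplug -e   # re-enable it for the next pass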
01:23:11.381 [2024-12-09 05:17:53.454788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.454830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.454850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 [2024-12-09 05:17:53.454874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.454888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.454900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 [2024-12-09 05:17:53.454916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.454927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.454944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 [2024-12-09 05:17:53.454957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:11.381 [2024-12-09 05:17:53.454971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:23:11.381 [2024-12-09 05:17:53.454983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:11.381 05:17:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.381 05:17:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:11.381 05:17:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:23:11.381 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:23:11.641 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:23:11.641 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:23:11.641 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:23:11.641 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:23:11.641 05:17:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
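The 45.15/45.18-second figures in this log are produced by the timing_cmd wrapper traced at autotest_common.sh@709-@722: TIMEFORMAT=%2R makes bash's time keyword print only the elapsed seconds to two decimals, which the wrapper captures while passing the command's own stdout through on fd 3. A simplified sketch (the real helper's stderr handling is more involved):

timing_cmd() (
    local cmd_es=0 time=0 TIMEFORMAT=%2R   # @709, @713 in the trace
    exec 3>&1                              # @711: save stdout for the command
    time=$({ time "$@" >&3; } 2>&1) || cmd_es=$?
    echo "$time"                           # @720: becomes helper_time at @21
    return "$cmd_es"                       # @722
)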
01:23:11.641 05:17:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:23:11.641 05:17:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:23:11.641 05:17:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:23.847 05:18:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.847 05:18:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:23.847 05:18:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:23:23.847 [2024-12-09 05:18:06.132657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:23:23.847 [2024-12-09 05:18:06.135252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:23.847 [2024-12-09 05:18:06.135403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:23:23.847 [2024-12-09 05:18:06.135427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:23.847 [2024-12-09 05:18:06.135455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:23.847 [2024-12-09 05:18:06.135478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:23:23.847 [2024-12-09 05:18:06.135493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:23.847 [2024-12-09 05:18:06.135507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:23.847 [2024-12-09 05:18:06.135523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:23:23.847 [2024-12-09 05:18:06.135534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:23.847 [2024-12-09 05:18:06.135550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:23.847 [2024-12-09 05:18:06.135561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:23:23.847 [2024-12-09 05:18:06.135575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:23.847 05:18:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.847 05:18:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:23.847 05:18:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 01:23:23.847 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:23:24.414 [2024-12-09 05:18:06.631848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 01:23:24.414 [2024-12-09 05:18:06.634093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:24.414 [2024-12-09 05:18:06.634133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:23:24.414 [2024-12-09 05:18:06.634152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:24.414 [2024-12-09 05:18:06.634174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:24.414 [2024-12-09 05:18:06.634190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:23:24.414 [2024-12-09 05:18:06.634202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:24.414 [2024-12-09 05:18:06.634217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:24.414 [2024-12-09 05:18:06.634228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:23:24.414 [2024-12-09 05:18:06.634242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:24.414 [2024-12-09 05:18:06.634255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:24.414 [2024-12-09 05:18:06.634268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:23:24.414 [2024-12-09 05:18:06.634279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
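The bare "echo 1" at @40 in every iteration above is the surprise-removal trigger that puts each controller into the failed state logged here. Only the value appears in the xtrace; the usual target for this pattern is the device's sysfs remove node, assumed in this sketch:

detach() {
    echo 1 > "/sys/bus/pci/devices/$1/remove"  # assumed path, not shown in the trace
}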
01:23:24.414 05:18:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.414 05:18:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:24.414 05:18:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:23:24.414 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:23:24.672 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:23:24.672 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:23:24.672 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:23:24.672 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:23:24.672 05:18:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:23:24.672 05:18:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:23:24.672 05:18:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:23:24.672 05:18:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:36.880 05:18:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.880 05:18:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:36.880 05:18:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:36.880 05:18:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.880 05:18:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:36.880 05:18:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.880 [2024-12-09 05:18:19.211644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
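After the 12-second settle sleep, the @68-@71 records above check that the bdev-reported BDF list again matches the expected pair exactly. The comparison, reconstructed from the @71 pattern match in this log:

bdfs=($(bdev_bdfs))                                  # @70
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]      # @71: must match both devices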
01:23:36.880 [2024-12-09 05:18:19.213311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:36.880 [2024-12-09 05:18:19.213360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:23:36.880 [2024-12-09 05:18:19.213377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:36.880 [2024-12-09 05:18:19.213402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:36.880 [2024-12-09 05:18:19.213414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:23:36.880 [2024-12-09 05:18:19.213429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:36.880 [2024-12-09 05:18:19.213443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:36.880 [2024-12-09 05:18:19.213471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:23:36.880 [2024-12-09 05:18:19.213483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:36.880 [2024-12-09 05:18:19.213499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:36.880 [2024-12-09 05:18:19.213510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:23:36.880 [2024-12-09 05:18:19.213524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:23:36.880 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:23:37.450 [2024-12-09 05:18:19.610991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
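Everything repeating between the two "remove_attach_helper took ..." summaries collapses to the loop below, pieced together from the sw_hotplug.sh line tags (@27-@71). This is a condensed sketch, not the verbatim script: detach and rebind_to are the assumed helpers sketched earlier, and nvmes is taken to be the test's array of expected BDFs.

remove_attach_helper() {
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3       # invoked above as: 3 6 true
    sleep "$hotplug_wait"                                      # @36
    while ((hotplug_events--)); do                             # @38
        for dev in "${nvmes[@]}"; do detach "$dev"; done       # @39-@40
        while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]})); do sleep 0.5; done  # @50
        for dev in "${nvmes[@]}"; do rebind_to uio_pci_generic "$dev"; done  # @58-@62
        sleep 12                                               # @66; plausibly 2*hotplug_wait
        bdfs=($(bdev_bdfs))                                    # @70
        [[ ${bdfs[*]} == "${nvmes[*]}" ]]                      # @71
    done
}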
01:23:37.450 [2024-12-09 05:18:19.613367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:37.450 [2024-12-09 05:18:19.613408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.450 [2024-12-09 05:18:19.613427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.450 [2024-12-09 05:18:19.613450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:37.450 [2024-12-09 05:18:19.613483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.450 [2024-12-09 05:18:19.613496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.450 [2024-12-09 05:18:19.613528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:37.450 [2024-12-09 05:18:19.613539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.450 [2024-12-09 05:18:19.613553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.450 [2024-12-09 05:18:19.613566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:23:37.450 [2024-12-09 05:18:19.613584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.450 [2024-12-09 05:18:19.613596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:37.450 05:18:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:37.450 05:18:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:37.450 05:18:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:23:37.450 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:23:37.722 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:23:37.722 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:23:37.722 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:23:37.722 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:23:37.722 05:18:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:23:37.722 05:18:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:23:37.722 05:18:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:23:37.722 05:18:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 01:23:49.932 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 01:23:49.932 05:18:32 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68666 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68666 ']' 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68666 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@959 -- # uname 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68666 01:23:49.932 killing process with pid 68666 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68666' 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68666 01:23:49.932 05:18:32 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68666 01:23:52.467 05:18:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:23:52.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:23:53.292 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:23:53.292 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:23:53.292 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:23:53.292 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:23:53.551 01:23:53.551 real 2m33.382s 01:23:53.551 user 1m51.090s 01:23:53.551 sys 0m22.479s 01:23:53.551 05:18:35 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:23:53.551 ************************************ 01:23:53.551 END TEST sw_hotplug 01:23:53.551 ************************************ 01:23:53.551 05:18:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:23:53.551 05:18:35 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 01:23:53.551 05:18:35 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 01:23:53.551 05:18:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:23:53.551 05:18:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:53.551 05:18:35 -- common/autotest_common.sh@10 -- # set +x 01:23:53.551 ************************************ 01:23:53.551 START TEST nvme_xnvme 01:23:53.551 ************************************ 01:23:53.551 05:18:35 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 01:23:53.811 * Looking for test storage... 01:23:53.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:23:53.811 05:18:36 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:23:53.811 05:18:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:23:53.812 05:18:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:23:53.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:53.812 --rc genhtml_branch_coverage=1 01:23:53.812 --rc genhtml_function_coverage=1 01:23:53.812 --rc genhtml_legend=1 01:23:53.812 --rc geninfo_all_blocks=1 01:23:53.812 --rc geninfo_unexecuted_blocks=1 01:23:53.812 01:23:53.812 ' 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:23:53.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:53.812 --rc genhtml_branch_coverage=1 01:23:53.812 --rc genhtml_function_coverage=1 01:23:53.812 --rc genhtml_legend=1 01:23:53.812 --rc geninfo_all_blocks=1 01:23:53.812 --rc geninfo_unexecuted_blocks=1 01:23:53.812 01:23:53.812 ' 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:23:53.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:53.812 --rc genhtml_branch_coverage=1 01:23:53.812 --rc genhtml_function_coverage=1 01:23:53.812 --rc genhtml_legend=1 01:23:53.812 --rc geninfo_all_blocks=1 01:23:53.812 --rc geninfo_unexecuted_blocks=1 01:23:53.812 01:23:53.812 ' 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:23:53.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:53.812 --rc genhtml_branch_coverage=1 01:23:53.812 --rc genhtml_function_coverage=1 01:23:53.812 --rc genhtml_legend=1 01:23:53.812 --rc geninfo_all_blocks=1 01:23:53.812 --rc geninfo_unexecuted_blocks=1 01:23:53.812 01:23:53.812 ' 01:23:53.812 05:18:36 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 01:23:53.812 05:18:36 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 01:23:53.812 05:18:36 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:23:53.812 05:18:36 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
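The lcov version probe traced above (scripts/common.sh@333-@368, entered via `lt 1.15 2`) splits both version strings on '.', '-' and ':' and compares them component-wise. A condensed sketch of that comparison; validation of non-numeric components and the full operator table are simplified here:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"   # @336: split first version
    IFS=.-: read -ra ver2 <<< "$3"   # @337: split second version
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $2 == '>' ]]; return; }  # @367
        ((ver1[v] < ver2[v])) && { [[ $2 == '<' ]]; return; }  # @368
    done
    [[ $2 == *'='* ]]   # equal versions satisfy only <=, >=, ==
}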
01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 01:23:53.812 05:18:36 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 01:23:53.813 05:18:36 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 01:23:53.813 05:18:36 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 01:23:53.813 05:18:36 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 01:23:53.813 05:18:36 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 01:23:53.813 #define SPDK_CONFIG_H 01:23:53.813 #define SPDK_CONFIG_AIO_FSDEV 1 01:23:53.813 #define SPDK_CONFIG_APPS 1 01:23:53.813 #define SPDK_CONFIG_ARCH native 01:23:53.813 #define SPDK_CONFIG_ASAN 1 01:23:53.813 #undef SPDK_CONFIG_AVAHI 01:23:53.813 #undef SPDK_CONFIG_CET 01:23:53.813 #define SPDK_CONFIG_COPY_FILE_RANGE 1 01:23:53.813 #define SPDK_CONFIG_COVERAGE 1 01:23:53.813 #define SPDK_CONFIG_CROSS_PREFIX 01:23:53.813 #undef SPDK_CONFIG_CRYPTO 01:23:53.813 #undef SPDK_CONFIG_CRYPTO_MLX5 01:23:53.813 #undef SPDK_CONFIG_CUSTOMOCF 01:23:53.813 #undef SPDK_CONFIG_DAOS 01:23:53.813 #define SPDK_CONFIG_DAOS_DIR 01:23:53.813 #define SPDK_CONFIG_DEBUG 1 01:23:53.813 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 01:23:53.813 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 01:23:53.813 #define SPDK_CONFIG_DPDK_INC_DIR 01:23:53.813 #define SPDK_CONFIG_DPDK_LIB_DIR 01:23:53.813 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 01:23:53.813 #undef SPDK_CONFIG_DPDK_UADK 01:23:53.813 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:23:53.813 #define SPDK_CONFIG_EXAMPLES 1 01:23:53.813 #undef SPDK_CONFIG_FC 01:23:53.813 #define SPDK_CONFIG_FC_PATH 01:23:53.813 #define SPDK_CONFIG_FIO_PLUGIN 1 01:23:53.813 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 01:23:53.813 #define SPDK_CONFIG_FSDEV 1 01:23:53.813 #undef SPDK_CONFIG_FUSE 01:23:53.813 #undef SPDK_CONFIG_FUZZER 01:23:53.813 #define SPDK_CONFIG_FUZZER_LIB 01:23:53.813 #undef SPDK_CONFIG_GOLANG 01:23:53.813 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 01:23:53.813 #define SPDK_CONFIG_HAVE_EVP_MAC 1 01:23:53.813 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 01:23:53.813 #define SPDK_CONFIG_HAVE_KEYUTILS 1 01:23:53.813 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 01:23:53.813 #undef SPDK_CONFIG_HAVE_LIBBSD 01:23:53.813 #undef SPDK_CONFIG_HAVE_LZ4 01:23:53.813 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 01:23:53.813 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 01:23:53.813 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 01:23:53.813 #define SPDK_CONFIG_IDXD 1 01:23:53.813 #define SPDK_CONFIG_IDXD_KERNEL 1 01:23:53.813 #undef SPDK_CONFIG_IPSEC_MB 01:23:53.813 #define SPDK_CONFIG_IPSEC_MB_DIR 01:23:53.813 #define SPDK_CONFIG_ISAL 1 01:23:53.813 #define SPDK_CONFIG_ISAL_CRYPTO 1 01:23:53.813 #define SPDK_CONFIG_ISCSI_INITIATOR 1 01:23:53.813 #define SPDK_CONFIG_LIBDIR 01:23:53.813 #undef SPDK_CONFIG_LTO 01:23:53.813 #define SPDK_CONFIG_MAX_LCORES 128 01:23:53.813 #define SPDK_CONFIG_MAX_NUMA_NODES 1 01:23:53.813 #define SPDK_CONFIG_NVME_CUSE 1 01:23:53.813 #undef SPDK_CONFIG_OCF 01:23:53.813 #define SPDK_CONFIG_OCF_PATH 01:23:53.813 #define SPDK_CONFIG_OPENSSL_PATH 01:23:53.813 #undef SPDK_CONFIG_PGO_CAPTURE 01:23:53.813 #define SPDK_CONFIG_PGO_DIR 01:23:53.813 #undef SPDK_CONFIG_PGO_USE 01:23:53.813 #define SPDK_CONFIG_PREFIX /usr/local 01:23:53.813 #undef SPDK_CONFIG_RAID5F 01:23:53.813 #undef SPDK_CONFIG_RBD 01:23:53.813 #define SPDK_CONFIG_RDMA 1 01:23:53.813 #define SPDK_CONFIG_RDMA_PROV verbs 01:23:53.813 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 01:23:53.813 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 01:23:53.813 #define SPDK_CONFIG_RDMA_SET_TOS 1 01:23:53.813 #define SPDK_CONFIG_SHARED 1 01:23:53.813 #undef SPDK_CONFIG_SMA 01:23:53.813 #define SPDK_CONFIG_TESTS 1 01:23:53.813 #undef SPDK_CONFIG_TSAN 01:23:53.813 #define SPDK_CONFIG_UBLK 1 01:23:53.813 #define SPDK_CONFIG_UBSAN 1 01:23:53.813 #undef SPDK_CONFIG_UNIT_TESTS 01:23:53.813 #undef SPDK_CONFIG_URING 01:23:53.813 #define SPDK_CONFIG_URING_PATH 01:23:53.813 #undef SPDK_CONFIG_URING_ZNS 01:23:53.813 #undef SPDK_CONFIG_USDT 01:23:53.814 #undef SPDK_CONFIG_VBDEV_COMPRESS 01:23:53.814 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 01:23:53.814 #undef SPDK_CONFIG_VFIO_USER 01:23:53.814 #define SPDK_CONFIG_VFIO_USER_DIR 01:23:53.814 #define SPDK_CONFIG_VHOST 1 01:23:53.814 #define SPDK_CONFIG_VIRTIO 1 01:23:53.814 #undef SPDK_CONFIG_VTUNE 01:23:53.814 #define SPDK_CONFIG_VTUNE_DIR 01:23:53.814 #define SPDK_CONFIG_WERROR 1 01:23:53.814 #define SPDK_CONFIG_WPDK_DIR 01:23:53.814 #define SPDK_CONFIG_XNVME 1 01:23:53.814 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 01:23:53.814 05:18:36 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 01:23:53.814 05:18:36 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:23:53.814 05:18:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 01:23:53.814 05:18:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:23:53.814 05:18:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:23:53.814 05:18:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:23:53.814 05:18:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:53.814 05:18:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:53.814 05:18:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:53.814 05:18:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 01:23:53.814 05:18:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@68 -- # uname -s 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 01:23:53.814 
05:18:36 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 01:23:53.814 05:18:36 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@70 -- # : 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@126 -- # : 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 01:23:53.814 05:18:36 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@140 -- # : 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@154 -- # : 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@169 -- # : 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 01:23:53.815 05:18:36 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:23:53.815 05:18:36 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 01:23:53.815 05:18:36 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
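
The long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs above is autotest_common.sh giving every test flag a default and exporting it for child scripts. A minimal sketch of the idiom as it reads from this trace (flag names and values are taken from the log; the real file may arrange them differently):

    # ": ${VAR:=default}" assigns only when VAR is unset, and under
    # `set -x` it traces as the bare expansion -- ": 0", ": 1", ": rdma" --
    # immediately followed by the matching "export VAR" line seen above.
    : "${SPDK_TEST_CRYPTO:=0}"
    export SPDK_TEST_CRYPTO
    : "${SPDK_RUN_ASAN:=1}"    # already 1 for this job via autorun-spdk.conf
    export SPDK_RUN_ASAN
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
    export SPDK_TEST_NVMF_TRANSPORT
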
01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70034 ]] 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70034 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 01:23:53.816 05:18:36 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.7bt2l9 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.7bt2l9/tests/xnvme /tmp/spdk.7bt2l9 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 01:23:54.076 05:18:36 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13888385024 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5679992832 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13888385024 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5679992832 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.076 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97121087488 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2581692416 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 01:23:54.077 * Looking for test storage... 
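
Before the "Looking for test storage" banner, set_test_storage walks "df -T" into per-mount associative arrays, then accepts the first candidate directory whose filesystem still has room for the requested size (2214592512 bytes here, i.e. the 2 GiB request plus a 64 MiB margin). A simplified reconstruction of that loop from the trace; the real function in autotest_common.sh handles more cases (tmpfs/ramfs checks and resizing are visible above), and storage_candidates is filled in by the caller as traced at @356:

    set_test_storage() {
        local requested_size=$1 target_dir mount target_space
        local -A mounts fss sizes avails uses
        # df -T fields: source fstype size used avail use% mountpoint
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source; fss["$mount"]=$fs
            sizes["$mount"]=$size; uses["$mount"]=$use; avails["$mount"]=$avail
        done < <(df -T | grep -v Filesystem)
        for target_dir in "${storage_candidates[@]}"; do
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
            target_space=${avails["$mount"]}
            if ((target_space >= requested_size)); then
                export SPDK_TEST_STORAGE=$target_dir
                printf '* Found test storage at %s\n' "$target_dir"
                return 0
            fi
        done
    }
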
01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13888385024 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:23:54.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:23:54.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.077 --rc genhtml_branch_coverage=1 01:23:54.077 --rc genhtml_function_coverage=1 01:23:54.077 --rc genhtml_legend=1 01:23:54.077 --rc geninfo_all_blocks=1 01:23:54.077 --rc geninfo_unexecuted_blocks=1 01:23:54.077 01:23:54.077 ' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:23:54.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.077 --rc genhtml_branch_coverage=1 01:23:54.077 --rc genhtml_function_coverage=1 01:23:54.077 --rc genhtml_legend=1 01:23:54.077 --rc geninfo_all_blocks=1 
01:23:54.077 --rc geninfo_unexecuted_blocks=1 01:23:54.077 01:23:54.077 ' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:23:54.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.077 --rc genhtml_branch_coverage=1 01:23:54.077 --rc genhtml_function_coverage=1 01:23:54.077 --rc genhtml_legend=1 01:23:54.077 --rc geninfo_all_blocks=1 01:23:54.077 --rc geninfo_unexecuted_blocks=1 01:23:54.077 01:23:54.077 ' 01:23:54.077 05:18:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:23:54.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.077 --rc genhtml_branch_coverage=1 01:23:54.077 --rc genhtml_function_coverage=1 01:23:54.077 --rc genhtml_legend=1 01:23:54.077 --rc geninfo_all_blocks=1 01:23:54.077 --rc geninfo_unexecuted_blocks=1 01:23:54.077 01:23:54.077 ' 01:23:54.077 05:18:36 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:23:54.077 05:18:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:23:54.077 05:18:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.077 05:18:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.077 05:18:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.077 05:18:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 01:23:54.077 05:18:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.077 05:18:36 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 01:23:54.077 05:18:36 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 01:23:54.077 05:18:36 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 01:23:54.078 05:18:36 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:23:54.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:23:54.903 Waiting for block devices as requested 01:23:54.903 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:23:55.162 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:23:55.162 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:23:55.419 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:24:00.756 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:24:00.756 05:18:42 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 01:24:00.756 05:18:43 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 01:24:00.756 05:18:43 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 01:24:01.014 05:18:43 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 01:24:01.014 05:18:43 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 01:24:01.014 05:18:43 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 01:24:01.014 05:18:43 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:24:01.014 05:18:43 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:24:01.014 No valid GPT data, bailing 01:24:01.014 05:18:43 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:24:01.273 05:18:43 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 01:24:01.273 05:18:43 nvme_xnvme -- scripts/common.sh@395 -- # return 1 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 01:24:01.273 05:18:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:24:01.273 05:18:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:24:01.273 05:18:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:01.273 05:18:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:24:01.273 ************************************ 01:24:01.273 START TEST xnvme_rpc 01:24:01.273 ************************************ 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70435 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70435 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70435 ']' 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:01.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:01.273 05:18:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:01.274 [2024-12-09 05:18:43.605253] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
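
Just before the xnvme_rpc test launches its spdk_tgt (pid 70435 above), prep_nvme reloads the nvme driver with poll_queues=10 and probes each /dev/nvme*n* namespace with block_in_use; "No valid GPT data, bailing" plus an empty blkid PTTYPE means the disk is unclaimed, so it becomes the libaio/io_uring test device. A rough sketch of that probe as traced (paths per this log; the real helper in scripts/common.sh differs in detail):

    block_in_use() {
        local block=$1 pt
        # spdk-gpt.py looks for an SPDK-marked GPT on the disk; above it
        # bailed with "No valid GPT data", so fall through to blkid.
        /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block" || true
        pt=$(blkid -s PTTYPE -o value "$block") || true
        [[ -n $pt ]] && return 0   # partition table present: disk in use
        return 1                   # free disk, matching the "return 1" above
    }
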
01:24:01.274 [2024-12-09 05:18:43.605617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70435 ] 01:24:01.532 [2024-12-09 05:18:43.778662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:01.532 [2024-12-09 05:18:43.892753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:02.468 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:02.468 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:02.469 xnvme_bdev 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:02.469 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.728 05:18:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70435 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70435 ']' 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70435 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70435 01:24:02.728 killing process with pid 70435 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70435' 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70435 01:24:02.728 05:18:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70435 01:24:05.261 01:24:05.261 real 0m4.141s 01:24:05.261 user 0m4.174s 01:24:05.261 sys 0m0.572s 01:24:05.261 05:18:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:05.261 ************************************ 01:24:05.261 END TEST xnvme_rpc 01:24:05.261 ************************************ 01:24:05.261 05:18:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:05.261 05:18:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:24:05.261 05:18:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:24:05.261 05:18:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:05.261 05:18:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:24:05.261 ************************************ 01:24:05.261 START TEST xnvme_bdevperf 01:24:05.261 ************************************ 01:24:05.261 05:18:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:24:05.261 05:18:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:24:05.262 05:18:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 01:24:05.262 05:18:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:05.262 05:18:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:24:05.262 05:18:47 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:24:05.262 05:18:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:24:05.262 05:18:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:24:05.520 { 01:24:05.520 "subsystems": [ 01:24:05.520 { 01:24:05.520 "subsystem": "bdev", 01:24:05.520 "config": [ 01:24:05.520 { 01:24:05.520 "params": { 01:24:05.520 "io_mechanism": "libaio", 01:24:05.520 "conserve_cpu": false, 01:24:05.520 "filename": "/dev/nvme0n1", 01:24:05.520 "name": "xnvme_bdev" 01:24:05.520 }, 01:24:05.520 "method": "bdev_xnvme_create" 01:24:05.520 }, 01:24:05.520 { 01:24:05.520 "method": "bdev_wait_for_examine" 01:24:05.520 } 01:24:05.520 ] 01:24:05.520 } 01:24:05.520 ] 01:24:05.520 } 01:24:05.520 [2024-12-09 05:18:47.810271] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:05.520 [2024-12-09 05:18:47.810570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70520 ] 01:24:05.779 [2024-12-09 05:18:47.993472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:05.779 [2024-12-09 05:18:48.126108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:06.345 Running I/O for 5 seconds... 01:24:08.216 34356.00 IOPS, 134.20 MiB/s [2024-12-09T05:18:51.608Z] 36982.50 IOPS, 144.46 MiB/s [2024-12-09T05:18:52.983Z] 42612.67 IOPS, 166.46 MiB/s [2024-12-09T05:18:53.917Z] 44885.50 IOPS, 175.33 MiB/s 01:24:11.461 Latency(us) 01:24:11.461 [2024-12-09T05:18:53.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:24:11.461 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:24:11.461 xnvme_bdev : 5.00 44405.04 173.46 0.00 0.00 1438.02 170.26 5553.45 01:24:11.461 [2024-12-09T05:18:53.917Z] =================================================================================================================== 01:24:11.461 [2024-12-09T05:18:53.917Z] Total : 44405.04 173.46 0.00 0.00 1438.02 170.26 5553.45 01:24:12.398 05:18:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:12.398 05:18:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:24:12.398 05:18:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:24:12.398 05:18:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:24:12.398 05:18:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:24:12.657 { 01:24:12.657 "subsystems": [ 01:24:12.657 { 01:24:12.657 "subsystem": "bdev", 01:24:12.657 "config": [ 01:24:12.657 { 01:24:12.657 "params": { 01:24:12.657 "io_mechanism": "libaio", 01:24:12.657 "conserve_cpu": false, 01:24:12.657 "filename": "/dev/nvme0n1", 01:24:12.657 "name": "xnvme_bdev" 01:24:12.657 }, 01:24:12.657 "method": "bdev_xnvme_create" 01:24:12.657 }, 01:24:12.657 { 01:24:12.657 "method": "bdev_wait_for_examine" 01:24:12.657 } 01:24:12.657 ] 01:24:12.657 } 01:24:12.657 ] 01:24:12.657 } 01:24:12.657 [2024-12-09 05:18:54.939787] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
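
The bdevperf run starting above receives its bdev table as JSON on file descriptor 62 -- the "subsystems" blob gen_conf printed just before it. To rerun the same randread pass by hand, a roughly equivalent standalone invocation (paths and flags copied from this log, the fd replaced by process substitution) would be:

    # Same bdev_xnvme_create config as in the trace, fed to bdevperf
    # directly instead of via the harness's /dev/fd/62.
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
       "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
        --json <(printf '%s' "$conf")
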
01:24:12.657 [2024-12-09 05:18:54.939906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70609 ] 01:24:12.915 [2024-12-09 05:18:55.120994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:12.915 [2024-12-09 05:18:55.249125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:13.481 Running I/O for 5 seconds... 01:24:15.352 50209.00 IOPS, 196.13 MiB/s [2024-12-09T05:18:58.744Z] 52136.00 IOPS, 203.66 MiB/s [2024-12-09T05:19:00.118Z] 52252.00 IOPS, 204.11 MiB/s [2024-12-09T05:19:00.685Z] 52058.00 IOPS, 203.35 MiB/s 01:24:18.229 Latency(us) 01:24:18.229 [2024-12-09T05:19:00.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:24:18.229 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:24:18.229 xnvme_bdev : 5.00 52016.24 203.19 0.00 0.00 1227.45 173.55 6422.00 01:24:18.229 [2024-12-09T05:19:00.685Z] =================================================================================================================== 01:24:18.229 [2024-12-09T05:19:00.685Z] Total : 52016.24 203.19 0.00 0.00 1227.45 173.55 6422.00 01:24:19.607 01:24:19.607 real 0m14.236s 01:24:19.607 user 0m5.505s 01:24:19.607 sys 0m6.714s 01:24:19.607 05:19:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:19.607 05:19:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:24:19.608 ************************************ 01:24:19.608 END TEST xnvme_bdevperf 01:24:19.608 ************************************ 01:24:19.608 05:19:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:24:19.608 05:19:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:24:19.608 05:19:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:19.608 05:19:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:24:19.608 ************************************ 01:24:19.608 START TEST xnvme_fio_plugin 01:24:19.608 ************************************ 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:24:19.608 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:19.866 { 01:24:19.866 "subsystems": [ 01:24:19.866 { 01:24:19.866 "subsystem": "bdev", 01:24:19.866 "config": [ 01:24:19.866 { 01:24:19.866 "params": { 01:24:19.866 "io_mechanism": "libaio", 01:24:19.866 "conserve_cpu": false, 01:24:19.866 "filename": "/dev/nvme0n1", 01:24:19.866 "name": "xnvme_bdev" 01:24:19.866 }, 01:24:19.866 "method": "bdev_xnvme_create" 01:24:19.866 }, 01:24:19.866 { 01:24:19.866 "method": "bdev_wait_for_examine" 01:24:19.866 } 01:24:19.866 ] 01:24:19.866 } 01:24:19.866 ] 01:24:19.866 } 01:24:19.866 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:24:19.866 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:24:19.866 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:24:19.866 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:19.866 05:19:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:19.866 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:24:19.866 fio-3.35 01:24:19.866 Starting 1 thread 01:24:26.424 01:24:26.424 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70737: Mon Dec 9 05:19:08 2024 01:24:26.424 read: IOPS=50.4k, BW=197MiB/s (206MB/s)(984MiB/5001msec) 01:24:26.424 slat (usec): min=4, max=1166, avg=17.30, stdev=35.16 01:24:26.424 clat (usec): min=84, max=5497, avg=768.20, stdev=433.01 01:24:26.424 lat (usec): min=125, max=5610, avg=785.50, stdev=434.34 01:24:26.424 clat percentiles (usec): 01:24:26.424 | 1.00th=[ 172], 5.00th=[ 258], 10.00th=[ 330], 20.00th=[ 445], 01:24:26.424 | 30.00th=[ 537], 40.00th=[ 619], 50.00th=[ 701], 60.00th=[ 791], 01:24:26.424 | 70.00th=[ 889], 80.00th=[ 1029], 90.00th=[ 1237], 95.00th=[ 1434], 01:24:26.424 | 99.00th=[ 2409], 99.50th=[ 3032], 99.90th=[ 4228], 99.95th=[ 4490], 01:24:26.424 | 99.99th=[ 4883] 01:24:26.424 bw ( KiB/s): min=154533, max=241736, per=98.63%, avg=198657.44, stdev=36879.03, samples=9 
01:24:26.424 iops : min=38633, max=60434, avg=49664.33, stdev=9219.79, samples=9 01:24:26.424 lat (usec) : 100=0.11%, 250=4.37%, 500=21.47%, 750=29.93%, 1000=22.42% 01:24:26.424 lat (msec) : 2=20.06%, 4=1.48%, 10=0.14% 01:24:26.424 cpu : usr=26.08%, sys=59.84%, ctx=28, majf=0, minf=764 01:24:26.424 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=9.9%, 16=24.7%, 32=59.0%, >=64=2.0% 01:24:26.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:26.424 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 01:24:26.424 issued rwts: total=251826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:26.424 latency : target=0, window=0, percentile=100.00%, depth=64 01:24:26.424 01:24:26.424 Run status group 0 (all jobs): 01:24:26.424 READ: bw=197MiB/s (206MB/s), 197MiB/s-197MiB/s (206MB/s-206MB/s), io=984MiB (1031MB), run=5001-5001msec 01:24:27.361 ----------------------------------------------------- 01:24:27.361 Suppressions used: 01:24:27.361 count bytes template 01:24:27.361 1 11 /usr/src/fio/parse.c 01:24:27.361 1 8 libtcmalloc_minimal.so 01:24:27.361 1 904 libcrypto.so 01:24:27.361 ----------------------------------------------------- 01:24:27.361 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:27.361 05:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:27.361 { 01:24:27.361 "subsystems": [ 01:24:27.361 { 01:24:27.361 "subsystem": "bdev", 01:24:27.361 "config": [ 01:24:27.361 { 01:24:27.361 "params": { 01:24:27.361 "io_mechanism": "libaio", 01:24:27.361 "conserve_cpu": false, 01:24:27.361 "filename": "/dev/nvme0n1", 01:24:27.361 "name": "xnvme_bdev" 01:24:27.361 }, 01:24:27.361 "method": "bdev_xnvme_create" 01:24:27.361 }, 01:24:27.361 { 01:24:27.361 "method": "bdev_wait_for_examine" 01:24:27.361 } 01:24:27.361 ] 01:24:27.361 } 01:24:27.361 ] 01:24:27.361 } 01:24:27.620 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:24:27.620 fio-3.35 01:24:27.620 Starting 1 thread 01:24:34.179 01:24:34.179 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70835: Mon Dec 9 05:19:15 2024 01:24:34.179 write: IOPS=51.9k, BW=203MiB/s (213MB/s)(1014MiB/5001msec); 0 zone resets 01:24:34.179 slat (usec): min=4, max=1171, avg=16.62, stdev=35.93 01:24:34.179 clat (usec): min=41, max=5259, avg=760.25, stdev=380.37 01:24:34.179 lat (usec): min=153, max=5330, avg=776.87, stdev=379.29 01:24:34.179 clat percentiles (usec): 01:24:34.179 | 1.00th=[ 178], 5.00th=[ 273], 10.00th=[ 347], 20.00th=[ 453], 01:24:34.179 | 30.00th=[ 545], 40.00th=[ 635], 50.00th=[ 717], 60.00th=[ 799], 01:24:34.179 | 70.00th=[ 898], 80.00th=[ 1012], 90.00th=[ 1205], 95.00th=[ 1385], 01:24:34.179 | 99.00th=[ 1958], 99.50th=[ 2474], 99.90th=[ 3654], 99.95th=[ 4146], 01:24:34.179 | 99.99th=[ 4555] 01:24:34.179 bw ( KiB/s): min=179096, max=225008, per=100.00%, avg=209435.56, stdev=17180.76, samples=9 01:24:34.179 iops : min=44774, max=56252, avg=52358.89, stdev=4295.19, samples=9 01:24:34.179 lat (usec) : 50=0.01%, 100=0.13%, 250=3.56%, 500=21.12%, 750=29.20% 01:24:34.179 lat (usec) : 1000=25.17% 01:24:34.179 lat (msec) : 2=19.89%, 4=0.87%, 10=0.07% 01:24:34.179 cpu : usr=28.22%, sys=59.06%, ctx=14, majf=0, minf=765 01:24:34.179 IO depths : 1=0.2%, 2=0.8%, 4=3.3%, 8=9.6%, 16=24.8%, 32=59.3%, >=64=2.0% 01:24:34.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:34.179 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 01:24:34.179 issued rwts: total=0,259704,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:34.179 latency : target=0, window=0, percentile=100.00%, depth=64 01:24:34.179 01:24:34.179 Run status group 0 (all jobs): 01:24:34.179 WRITE: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1014MiB (1064MB), run=5001-5001msec 01:24:35.117 ----------------------------------------------------- 01:24:35.117 Suppressions used: 01:24:35.117 count bytes template 01:24:35.117 1 11 /usr/src/fio/parse.c 01:24:35.117 1 8 libtcmalloc_minimal.so 01:24:35.117 1 904 libcrypto.so 01:24:35.117 ----------------------------------------------------- 01:24:35.117 01:24:35.117 01:24:35.117 real 0m15.243s 01:24:35.117 user 0m6.701s 01:24:35.117 sys 0m6.861s 01:24:35.117 
05:19:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:35.117 05:19:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:24:35.117 ************************************ 01:24:35.117 END TEST xnvme_fio_plugin 01:24:35.117 ************************************ 01:24:35.117 05:19:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:24:35.117 05:19:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 01:24:35.117 05:19:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 01:24:35.117 05:19:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:24:35.117 05:19:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:24:35.117 05:19:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:35.117 05:19:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:24:35.117 ************************************ 01:24:35.117 START TEST xnvme_rpc 01:24:35.117 ************************************ 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70921 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70921 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70921 ']' 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:35.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:35.117 05:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:35.117 [2024-12-09 05:19:17.473640] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
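The RPC exchange in this test maps one-to-one onto scripts/rpc.py calls, since the harness's rpc_cmd is a thin wrapper around that script. A sketch of the same create/inspect/delete lifecycle, assuming spdk_tgt listens on the default /var/tmp/spdk.sock and that a short sleep is an acceptable stand-in for the harness's waitforlisten:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & tgt=$!
sleep 2                      # crude stand-in for waitforlisten
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c    # -c sets conserve_cpu=true
"$rpc" framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
"$rpc" bdev_xnvme_delete xnvme_bdev
kill "$tgt"

The jq filter is the same one the test applies below to check name, filename, io_mechanism and conserve_cpu in turn.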
01:24:35.117 [2024-12-09 05:19:17.473767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 01:24:35.376 [2024-12-09 05:19:17.662559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:35.376 [2024-12-09 05:19:17.795617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:36.333 xnvme_bdev 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:24:36.333 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:24:36.596 05:19:18 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70921 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70921 ']' 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70921 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:36.596 05:19:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70921 01:24:36.596 killing process with pid 70921 01:24:36.596 05:19:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:36.596 05:19:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:36.596 05:19:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70921' 01:24:36.596 05:19:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70921 01:24:36.596 05:19:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70921 01:24:39.885 ************************************ 01:24:39.885 END TEST xnvme_rpc 01:24:39.885 ************************************ 01:24:39.885 01:24:39.885 real 0m4.280s 01:24:39.885 user 0m4.130s 01:24:39.885 sys 0m0.722s 01:24:39.885 05:19:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:39.885 05:19:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:24:39.885 05:19:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:24:39.885 05:19:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:24:39.885 05:19:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:39.885 05:19:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:24:39.885 ************************************ 01:24:39.885 START TEST xnvme_bdevperf 01:24:39.885 ************************************ 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:24:39.885 05:19:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:24:39.885 { 01:24:39.885 "subsystems": [ 01:24:39.885 { 01:24:39.885 "subsystem": "bdev", 01:24:39.885 "config": [ 01:24:39.885 { 01:24:39.885 "params": { 01:24:39.885 "io_mechanism": "libaio", 01:24:39.885 "conserve_cpu": true, 01:24:39.885 "filename": "/dev/nvme0n1", 01:24:39.885 "name": "xnvme_bdev" 01:24:39.885 }, 01:24:39.885 "method": "bdev_xnvme_create" 01:24:39.885 }, 01:24:39.885 { 01:24:39.885 "method": "bdev_wait_for_examine" 01:24:39.885 } 01:24:39.885 ] 01:24:39.885 } 01:24:39.885 ] 01:24:39.885 } 01:24:39.885 [2024-12-09 05:19:21.809125] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:39.885 [2024-12-09 05:19:21.809241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71012 ] 01:24:39.885 [2024-12-09 05:19:21.995606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:39.885 [2024-12-09 05:19:22.127204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:40.143 Running I/O for 5 seconds... 01:24:42.448 46659.00 IOPS, 182.26 MiB/s [2024-12-09T05:19:25.837Z] 45790.00 IOPS, 178.87 MiB/s [2024-12-09T05:19:26.773Z] 45795.33 IOPS, 178.89 MiB/s [2024-12-09T05:19:27.710Z] 45755.50 IOPS, 178.73 MiB/s 01:24:45.254 Latency(us) 01:24:45.254 [2024-12-09T05:19:27.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:24:45.254 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:24:45.254 xnvme_bdev : 5.00 45726.69 178.62 0.00 0.00 1396.07 212.20 5211.30 01:24:45.254 [2024-12-09T05:19:27.710Z] =================================================================================================================== 01:24:45.254 [2024-12-09T05:19:27.710Z] Total : 45726.69 178.62 0.00 0.00 1396.07 212.20 5211.30 01:24:46.631 05:19:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:46.631 05:19:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:24:46.631 05:19:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:24:46.631 05:19:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:24:46.631 05:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:24:46.631 { 01:24:46.631 "subsystems": [ 01:24:46.631 { 01:24:46.631 "subsystem": "bdev", 01:24:46.631 "config": [ 01:24:46.631 { 01:24:46.631 "params": { 01:24:46.631 "io_mechanism": "libaio", 01:24:46.631 "conserve_cpu": true, 01:24:46.631 "filename": "/dev/nvme0n1", 01:24:46.631 "name": "xnvme_bdev" 01:24:46.631 }, 01:24:46.631 "method": "bdev_xnvme_create" 01:24:46.631 }, 01:24:46.631 { 01:24:46.631 "method": "bdev_wait_for_examine" 01:24:46.631 } 01:24:46.631 ] 01:24:46.631 } 01:24:46.631 ] 01:24:46.631 } 01:24:46.631 [2024-12-09 05:19:28.924608] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
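As the xtrace at xnvme.sh@15 shows, xnvme_bdevperf is just a loop over I/O patterns with otherwise identical parameters. A condensed sketch, reusing the hypothetical fd-62 feeding from the earlier example and this pass's conserve_cpu=true config:

for io_pattern in randread randwrite; do
  # One 5 s bdevperf pass per pattern, qd 64, 4 KiB blocks.
  "$bdevperf" --json /dev/fd/62 -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": true,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
done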
01:24:46.631 [2024-12-09 05:19:28.924950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71094 ] 01:24:46.908 [2024-12-09 05:19:29.111994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:46.908 [2024-12-09 05:19:29.244997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:47.473 Running I/O for 5 seconds... 01:24:49.338 45391.00 IOPS, 177.31 MiB/s [2024-12-09T05:19:32.726Z] 45160.50 IOPS, 176.41 MiB/s [2024-12-09T05:19:34.097Z] 44616.33 IOPS, 174.28 MiB/s [2024-12-09T05:19:34.663Z] 44225.75 IOPS, 172.76 MiB/s 01:24:52.207 Latency(us) 01:24:52.207 [2024-12-09T05:19:34.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:24:52.207 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:24:52.207 xnvme_bdev : 5.00 44059.81 172.11 0.00 0.00 1449.01 163.68 5158.66 01:24:52.207 [2024-12-09T05:19:34.663Z] =================================================================================================================== 01:24:52.207 [2024-12-09T05:19:34.663Z] Total : 44059.81 172.11 0.00 0.00 1449.01 163.68 5158.66 01:24:53.581 01:24:53.581 real 0m14.248s 01:24:53.581 user 0m5.456s 01:24:53.581 sys 0m6.328s 01:24:53.581 05:19:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:53.581 05:19:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:24:53.581 ************************************ 01:24:53.581 END TEST xnvme_bdevperf 01:24:53.581 ************************************ 01:24:53.581 05:19:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:24:53.581 05:19:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:24:53.581 05:19:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:53.582 05:19:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:24:53.840 ************************************ 01:24:53.841 START TEST xnvme_fio_plugin 01:24:53.841 ************************************ 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:53.841 { 01:24:53.841 "subsystems": [ 01:24:53.841 { 01:24:53.841 "subsystem": "bdev", 01:24:53.841 "config": [ 01:24:53.841 { 01:24:53.841 "params": { 01:24:53.841 "io_mechanism": "libaio", 01:24:53.841 "conserve_cpu": true, 01:24:53.841 "filename": "/dev/nvme0n1", 01:24:53.841 "name": "xnvme_bdev" 01:24:53.841 }, 01:24:53.841 "method": "bdev_xnvme_create" 01:24:53.841 }, 01:24:53.841 { 01:24:53.841 "method": "bdev_wait_for_examine" 01:24:53.841 } 01:24:53.841 ] 01:24:53.841 } 01:24:53.841 ] 01:24:53.841 } 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:53.841 05:19:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:24:54.099 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:24:54.099 fio-3.35 01:24:54.099 Starting 1 thread 01:25:00.657 01:25:00.657 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71214: Mon Dec 9 05:19:42 2024 01:25:00.657 read: IOPS=47.6k, BW=186MiB/s (195MB/s)(929MiB/5001msec) 01:25:00.657 slat (usec): min=4, max=982, avg=18.25, stdev=33.91 01:25:00.657 clat (usec): min=84, max=5251, avg=814.28, stdev=443.55 01:25:00.657 lat (usec): min=137, max=5299, avg=832.53, stdev=444.30 01:25:00.657 clat percentiles (usec): 01:25:00.657 | 1.00th=[ 178], 5.00th=[ 258], 10.00th=[ 330], 20.00th=[ 457], 01:25:00.657 | 30.00th=[ 570], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 873], 01:25:00.657 | 70.00th=[ 979], 80.00th=[ 1090], 90.00th=[ 1270], 95.00th=[ 1450], 01:25:00.657 | 99.00th=[ 2507], 99.50th=[ 3097], 99.90th=[ 4080], 99.95th=[ 4293], 01:25:00.657 | 99.99th=[ 4817] 01:25:00.657 bw ( KiB/s): min=177624, max=200360, per=100.00%, avg=191991.11, stdev=7760.25, samples=9 
01:25:00.657 iops : min=44406, max=50090, avg=47997.78, stdev=1940.06, samples=9 01:25:00.657 lat (usec) : 100=0.07%, 250=4.47%, 500=19.44%, 750=23.78%, 1000=24.38% 01:25:00.657 lat (msec) : 2=26.06%, 4=1.67%, 10=0.12% 01:25:00.657 cpu : usr=26.56%, sys=57.78%, ctx=44, majf=0, minf=764 01:25:00.657 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=10.6%, 16=25.6%, 32=57.3%, >=64=1.8% 01:25:00.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:00.657 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 01:25:00.657 issued rwts: total=237841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:00.657 latency : target=0, window=0, percentile=100.00%, depth=64 01:25:00.657 01:25:00.657 Run status group 0 (all jobs): 01:25:00.657 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=929MiB (974MB), run=5001-5001msec 01:25:01.224 ----------------------------------------------------- 01:25:01.224 Suppressions used: 01:25:01.224 count bytes template 01:25:01.224 1 11 /usr/src/fio/parse.c 01:25:01.224 1 8 libtcmalloc_minimal.so 01:25:01.224 1 904 libcrypto.so 01:25:01.224 ----------------------------------------------------- 01:25:01.224 01:25:01.483 05:19:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:25:01.484 { 01:25:01.484 "subsystems": [ 01:25:01.484 { 01:25:01.484 "subsystem": "bdev", 01:25:01.484 "config": [ 01:25:01.484 { 01:25:01.484 "params": { 01:25:01.484 "io_mechanism": "libaio", 
01:25:01.484 "conserve_cpu": true, 01:25:01.484 "filename": "/dev/nvme0n1", 01:25:01.484 "name": "xnvme_bdev" 01:25:01.484 }, 01:25:01.484 "method": "bdev_xnvme_create" 01:25:01.484 }, 01:25:01.484 { 01:25:01.484 "method": "bdev_wait_for_examine" 01:25:01.484 } 01:25:01.484 ] 01:25:01.484 } 01:25:01.484 ] 01:25:01.484 } 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:25:01.484 05:19:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:01.484 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:25:01.484 fio-3.35 01:25:01.484 Starting 1 thread 01:25:08.051 01:25:08.051 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71319: Mon Dec 9 05:19:49 2024 01:25:08.051 write: IOPS=43.5k, BW=170MiB/s (178MB/s)(850MiB/5001msec); 0 zone resets 01:25:08.051 slat (usec): min=4, max=1182, avg=20.17, stdev=32.05 01:25:08.051 clat (usec): min=85, max=6942, avg=873.31, stdev=489.04 01:25:08.051 lat (usec): min=127, max=6952, avg=893.48, stdev=491.11 01:25:08.051 clat percentiles (usec): 01:25:08.051 | 1.00th=[ 196], 5.00th=[ 277], 10.00th=[ 351], 20.00th=[ 474], 01:25:08.051 | 30.00th=[ 594], 40.00th=[ 709], 50.00th=[ 824], 60.00th=[ 947], 01:25:08.051 | 70.00th=[ 1057], 80.00th=[ 1188], 90.00th=[ 1352], 95.00th=[ 1532], 01:25:08.051 | 99.00th=[ 2868], 99.50th=[ 3490], 99.90th=[ 4293], 99.95th=[ 4621], 01:25:08.051 | 99.99th=[ 5211] 01:25:08.051 bw ( KiB/s): min=169560, max=183456, per=100.00%, avg=175474.11, stdev=4210.20, samples=9 01:25:08.051 iops : min=42390, max=45864, avg=43868.44, stdev=1052.58, samples=9 01:25:08.051 lat (usec) : 100=0.05%, 250=3.36%, 500=18.70%, 750=21.34%, 1000=21.57% 01:25:08.051 lat (msec) : 2=32.69%, 4=2.08%, 10=0.21% 01:25:08.051 cpu : usr=25.54%, sys=56.38%, ctx=56, majf=0, minf=765 01:25:08.051 IO depths : 1=0.1%, 2=0.8%, 4=3.8%, 8=11.2%, 16=26.2%, 32=56.1%, >=64=1.8% 01:25:08.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:08.051 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 01:25:08.051 issued rwts: total=0,217645,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:08.051 latency : target=0, window=0, percentile=100.00%, depth=64 01:25:08.051 01:25:08.051 Run status group 0 (all jobs): 01:25:08.051 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=850MiB (891MB), run=5001-5001msec 01:25:08.987 ----------------------------------------------------- 01:25:08.987 Suppressions used: 01:25:08.987 count bytes template 01:25:08.987 1 11 /usr/src/fio/parse.c 01:25:08.987 1 8 libtcmalloc_minimal.so 01:25:08.987 1 904 libcrypto.so 01:25:08.987 ----------------------------------------------------- 01:25:08.987 01:25:08.987 01:25:08.987 real 0m15.286s 01:25:08.987 user 0m6.635s 01:25:08.987 sys 0m6.626s 01:25:08.987 ************************************ 01:25:08.987 END TEST 
xnvme_fio_plugin 01:25:08.987 ************************************ 01:25:08.987 05:19:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:08.987 05:19:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 01:25:08.987 05:19:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:25:08.987 05:19:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:08.987 05:19:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:08.987 05:19:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:25:08.987 ************************************ 01:25:08.987 START TEST xnvme_rpc 01:25:08.987 ************************************ 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71406 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71406 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71406 ']' 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:08.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:08.987 05:19:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:09.245 [2024-12-09 05:19:51.530888] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
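The assignments above (io_mechanism=io_uring, conserve_cpu=false) come from xnvme.sh's outer matrix: every io_mechanism is crossed with both conserve_cpu settings and the same three tests rerun for each cell. Schematically, with run_test and the three test functions being the harness's own, this sketch only mirrors the control flow visible in the xtrace and is not runnable standalone:

declare -A method_bdev_xnvme_create_0=( [filename]=/dev/nvme0n1 [name]=xnvme_bdev )
for io in libaio io_uring; do            # further mechanisms, if built, follow the same pattern
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  for cc in false true; do
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc        xnvme_rpc
    run_test xnvme_bdevperf   xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done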
01:25:09.245 [2024-12-09 05:19:51.531721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 01:25:09.503 [2024-12-09 05:19:51.737041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:09.503 [2024-12-09 05:19:51.862943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:10.440 xnvme_bdev 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:25:10.440 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:10.699 05:19:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:25:10.699 05:19:53 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71406 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71406 ']' 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71406 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:10.699 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71406 01:25:10.699 killing process with pid 71406 01:25:10.700 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:10.700 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:10.700 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71406' 01:25:10.700 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71406 01:25:10.700 05:19:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71406 01:25:13.285 01:25:13.285 real 0m4.321s 01:25:13.285 user 0m4.200s 01:25:13.285 sys 0m0.727s 01:25:13.285 ************************************ 01:25:13.285 END TEST xnvme_rpc 01:25:13.285 ************************************ 01:25:13.285 05:19:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:13.285 05:19:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:13.544 05:19:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:25:13.544 05:19:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:13.544 05:19:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:13.544 05:19:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:25:13.544 ************************************ 01:25:13.544 START TEST xnvme_bdevperf 01:25:13.544 ************************************ 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:25:13.544 05:19:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:25:13.544 { 01:25:13.544 "subsystems": [ 01:25:13.544 { 01:25:13.544 "subsystem": "bdev", 01:25:13.544 "config": [ 01:25:13.544 { 01:25:13.544 "params": { 01:25:13.544 "io_mechanism": "io_uring", 01:25:13.544 "conserve_cpu": false, 01:25:13.544 "filename": "/dev/nvme0n1", 01:25:13.544 "name": "xnvme_bdev" 01:25:13.544 }, 01:25:13.544 "method": "bdev_xnvme_create" 01:25:13.544 }, 01:25:13.544 { 01:25:13.544 "method": "bdev_wait_for_examine" 01:25:13.544 } 01:25:13.544 ] 01:25:13.544 } 01:25:13.544 ] 01:25:13.544 } 01:25:13.544 [2024-12-09 05:19:55.908266] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:25:13.544 [2024-12-09 05:19:55.908393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71498 ] 01:25:13.802 [2024-12-09 05:19:56.094696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:13.802 [2024-12-09 05:19:56.212482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:14.368 Running I/O for 5 seconds... 01:25:16.233 44479.00 IOPS, 173.75 MiB/s [2024-12-09T05:19:59.620Z] 40509.50 IOPS, 158.24 MiB/s [2024-12-09T05:20:01.003Z] 39949.33 IOPS, 156.05 MiB/s [2024-12-09T05:20:01.936Z] 38991.00 IOPS, 152.31 MiB/s [2024-12-09T05:20:01.936Z] 37564.20 IOPS, 146.74 MiB/s 01:25:19.480 Latency(us) 01:25:19.480 [2024-12-09T05:20:01.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:19.480 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:25:19.480 xnvme_bdev : 5.01 37544.69 146.66 0.00 0.00 1700.26 366.83 7369.51 01:25:19.480 [2024-12-09T05:20:01.936Z] =================================================================================================================== 01:25:19.480 [2024-12-09T05:20:01.936Z] Total : 37544.69 146.66 0.00 0.00 1700.26 366.83 7369.51 01:25:20.856 05:20:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:20.856 05:20:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:25:20.856 05:20:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:25:20.856 05:20:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:25:20.856 05:20:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:25:20.856 { 01:25:20.856 "subsystems": [ 01:25:20.856 { 01:25:20.856 "subsystem": "bdev", 01:25:20.856 "config": [ 01:25:20.856 { 01:25:20.856 "params": { 01:25:20.856 "io_mechanism": "io_uring", 01:25:20.856 "conserve_cpu": false, 01:25:20.856 "filename": "/dev/nvme0n1", 01:25:20.856 "name": "xnvme_bdev" 01:25:20.856 }, 01:25:20.856 "method": "bdev_xnvme_create" 01:25:20.856 }, 01:25:20.856 { 01:25:20.856 "method": "bdev_wait_for_examine" 01:25:20.856 } 01:25:20.856 ] 01:25:20.856 } 01:25:20.856 ] 01:25:20.856 } 01:25:20.856 [2024-12-09 05:20:03.028335] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
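A quick consistency check on the randread summary above: at a 4 KiB I/O size, MiB/s is IOPS * 4096 / 2^20, i.e. IOPS / 256. For the Total row:

awk 'BEGIN { printf "%.2f MiB/s\n", 37544.69 * 4096 / (1024 * 1024) }'
# prints 146.66 MiB/s, matching the 37544.69 IOPS Total line.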
01:25:20.856 [2024-12-09 05:20:03.028518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71580 ] 01:25:20.856 [2024-12-09 05:20:03.218033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:21.115 [2024-12-09 05:20:03.342383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:21.374 Running I/O for 5 seconds... 01:25:23.684 27648.00 IOPS, 108.00 MiB/s [2024-12-09T05:20:07.072Z] 29376.00 IOPS, 114.75 MiB/s [2024-12-09T05:20:08.007Z] 27861.33 IOPS, 108.83 MiB/s [2024-12-09T05:20:08.942Z] 28032.00 IOPS, 109.50 MiB/s [2024-12-09T05:20:08.942Z] 27865.60 IOPS, 108.85 MiB/s 01:25:26.486 Latency(us) 01:25:26.486 [2024-12-09T05:20:08.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:26.486 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:25:26.486 xnvme_bdev : 5.01 27821.69 108.68 0.00 0.00 2293.18 1105.43 8369.66 01:25:26.486 [2024-12-09T05:20:08.942Z] =================================================================================================================== 01:25:26.486 [2024-12-09T05:20:08.942Z] Total : 27821.69 108.68 0.00 0.00 2293.18 1105.43 8369.66 01:25:27.864 01:25:27.864 real 0m14.258s 01:25:27.864 user 0m6.984s 01:25:27.864 sys 0m7.041s 01:25:27.864 05:20:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:27.864 ************************************ 01:25:27.864 END TEST xnvme_bdevperf 01:25:27.864 ************************************ 01:25:27.864 05:20:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:25:27.864 05:20:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:25:27.864 05:20:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:27.864 05:20:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:27.864 05:20:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:25:27.864 ************************************ 01:25:27.864 START TEST xnvme_fio_plugin 01:25:27.864 ************************************ 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:25:27.864 
05:20:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:25:27.864 { 01:25:27.864 "subsystems": [ 01:25:27.864 { 01:25:27.864 "subsystem": "bdev", 01:25:27.864 "config": [ 01:25:27.864 { 01:25:27.864 "params": { 01:25:27.864 "io_mechanism": "io_uring", 01:25:27.864 "conserve_cpu": false, 01:25:27.864 "filename": "/dev/nvme0n1", 01:25:27.864 "name": "xnvme_bdev" 01:25:27.864 }, 01:25:27.864 "method": "bdev_xnvme_create" 01:25:27.864 }, 01:25:27.864 { 01:25:27.864 "method": "bdev_wait_for_examine" 01:25:27.864 } 01:25:27.864 ] 01:25:27.864 } 01:25:27.864 ] 01:25:27.864 } 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:25:27.864 05:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:28.122 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:25:28.122 fio-3.35 01:25:28.122 Starting 1 thread 01:25:34.685 01:25:34.686 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71701: Mon Dec 9 05:20:16 2024 01:25:34.686 read: IOPS=27.0k, BW=105MiB/s (110MB/s)(527MiB/5001msec) 01:25:34.686 slat (usec): min=3, max=225, avg= 6.58, stdev= 2.41 01:25:34.686 clat (usec): min=1340, max=4769, avg=2111.36, stdev=303.15 01:25:34.686 lat (usec): min=1345, max=4778, avg=2117.94, stdev=304.25 01:25:34.686 clat percentiles (usec): 01:25:34.686 | 1.00th=[ 1500], 5.00th=[ 1631], 10.00th=[ 1713], 20.00th=[ 1827], 01:25:34.686 | 30.00th=[ 1942], 40.00th=[ 2024], 50.00th=[ 2114], 60.00th=[ 2180], 01:25:34.686 | 70.00th=[ 2278], 80.00th=[ 2376], 90.00th=[ 2507], 95.00th=[ 2606], 01:25:34.686 | 99.00th=[ 2769], 99.50th=[ 2835], 99.90th=[ 3032], 99.95th=[ 3294], 01:25:34.686 | 99.99th=[ 4686] 01:25:34.686 bw ( KiB/s): 
min=93696, max=124416, per=98.37%, avg=106154.67, stdev=11099.90, samples=9 01:25:34.686 iops : min=23424, max=31104, avg=26538.67, stdev=2774.97, samples=9 01:25:34.686 lat (msec) : 2=36.95%, 4=63.01%, 10=0.05% 01:25:34.686 cpu : usr=33.44%, sys=65.40%, ctx=11, majf=0, minf=762 01:25:34.686 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:25:34.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:34.686 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 01:25:34.686 issued rwts: total=134912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:34.686 latency : target=0, window=0, percentile=100.00%, depth=64 01:25:34.686 01:25:34.686 Run status group 0 (all jobs): 01:25:34.686 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=527MiB (553MB), run=5001-5001msec 01:25:35.622 ----------------------------------------------------- 01:25:35.622 Suppressions used: 01:25:35.622 count bytes template 01:25:35.622 1 11 /usr/src/fio/parse.c 01:25:35.622 1 8 libtcmalloc_minimal.so 01:25:35.622 1 904 libcrypto.so 01:25:35.622 ----------------------------------------------------- 01:25:35.622 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:25:35.622 05:20:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:25:35.622 { 01:25:35.622 "subsystems": [ 01:25:35.622 { 01:25:35.622 "subsystem": "bdev", 01:25:35.622 "config": [ 01:25:35.622 { 01:25:35.622 "params": { 01:25:35.622 "io_mechanism": "io_uring", 01:25:35.622 "conserve_cpu": false, 01:25:35.622 "filename": "/dev/nvme0n1", 01:25:35.622 "name": "xnvme_bdev" 01:25:35.622 }, 01:25:35.622 "method": "bdev_xnvme_create" 01:25:35.622 }, 01:25:35.622 { 01:25:35.622 "method": "bdev_wait_for_examine" 01:25:35.622 } 01:25:35.622 ] 01:25:35.622 } 01:25:35.622 ] 01:25:35.622 } 01:25:35.881 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:25:35.881 fio-3.35 01:25:35.881 Starting 1 thread 01:25:42.448 01:25:42.448 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71802: Mon Dec 9 05:20:23 2024 01:25:42.448 write: IOPS=24.8k, BW=96.9MiB/s (102MB/s)(485MiB/5001msec); 0 zone resets 01:25:42.448 slat (nsec): min=2468, max=93598, avg=8032.55, stdev=2609.47 01:25:42.448 clat (usec): min=518, max=10233, avg=2264.84, stdev=361.69 01:25:42.448 lat (usec): min=524, max=10245, avg=2272.88, stdev=362.69 01:25:42.448 clat percentiles (usec): 01:25:42.448 | 1.00th=[ 1450], 5.00th=[ 1631], 10.00th=[ 1778], 20.00th=[ 2040], 01:25:42.448 | 30.00th=[ 2147], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2376], 01:25:42.448 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2638], 95.00th=[ 2704], 01:25:42.448 | 99.00th=[ 2835], 99.50th=[ 2868], 99.90th=[ 2999], 99.95th=[ 9634], 01:25:42.448 | 99.99th=[10159] 01:25:42.448 bw ( KiB/s): min=93696, max=118784, per=100.00%, avg=99534.44, stdev=9864.07, samples=9 01:25:42.448 iops : min=23424, max=29696, avg=24883.56, stdev=2466.05, samples=9 01:25:42.448 lat (usec) : 750=0.01% 01:25:42.448 lat (msec) : 2=17.84%, 4=82.10%, 10=0.03%, 20=0.02% 01:25:42.448 cpu : usr=38.12%, sys=60.74%, ctx=11, majf=0, minf=763 01:25:42.448 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:25:42.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:42.448 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 01:25:42.448 issued rwts: total=0,124112,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:42.448 latency : target=0, window=0, percentile=100.00%, depth=64 01:25:42.448 01:25:42.448 Run status group 0 (all jobs): 01:25:42.448 WRITE: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=485MiB (508MB), run=5001-5001msec 01:25:43.381 ----------------------------------------------------- 01:25:43.381 Suppressions used: 01:25:43.381 count bytes template 01:25:43.381 1 11 /usr/src/fio/parse.c 01:25:43.381 1 8 libtcmalloc_minimal.so 01:25:43.381 1 904 libcrypto.so 01:25:43.381 ----------------------------------------------------- 01:25:43.381 01:25:43.381 01:25:43.381 real 0m15.391s 01:25:43.381 user 0m7.879s 01:25:43.381 sys 0m7.135s 01:25:43.382 ************************************ 01:25:43.382 END TEST xnvme_fio_plugin 
01:25:43.382 ************************************ 01:25:43.382 05:20:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:43.382 05:20:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:25:43.382 05:20:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:25:43.382 05:20:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 01:25:43.382 05:20:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 01:25:43.382 05:20:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:25:43.382 05:20:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:43.382 05:20:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:43.382 05:20:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:25:43.382 ************************************ 01:25:43.382 START TEST xnvme_rpc 01:25:43.382 ************************************ 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71894 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71894 01:25:43.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71894 ']' 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:43.382 05:20:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:43.382 [2024-12-09 05:20:25.733358] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
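The xnvme_rpc test starting here drives the full bdev lifecycle over the target's RPC socket: create an xnvme bdev, read its registered parameters back out of framework_get_config, then delete it. The rpc_cmd calls below go through SPDK's rpc.py; a minimal sketch of the same round-trip, with the rpc.py location assumed and everything else mirroring the log:

    # Sketch of the create/inspect/delete round-trip exercised by xnvme_rpc.
    # The trailing -c is the conserve_cpu flag (cc["true"]=-c in the log).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # path assumed
    "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    "$rpc" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true
    "$rpc" bdev_xnvme_delete xnvme_bdev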
01:25:43.382 [2024-12-09 05:20:25.733524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71894 ] 01:25:43.639 [2024-12-09 05:20:25.910380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:43.639 [2024-12-09 05:20:26.052680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:45.016 xnvme_bdev 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71894 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71894 ']' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71894 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71894 01:25:45.016 killing process with pid 71894 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71894' 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71894 01:25:45.016 05:20:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71894 01:25:48.304 01:25:48.304 real 0m4.460s 01:25:48.304 user 0m4.377s 01:25:48.304 sys 0m0.707s 01:25:48.304 ************************************ 01:25:48.304 END TEST xnvme_rpc 01:25:48.304 ************************************ 01:25:48.304 05:20:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:48.304 05:20:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:48.304 05:20:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:25:48.304 05:20:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:48.304 05:20:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:48.304 05:20:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:25:48.304 ************************************ 01:25:48.304 START TEST xnvme_bdevperf 01:25:48.304 ************************************ 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:25:48.304 05:20:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:25:48.304 { 01:25:48.304 "subsystems": [ 01:25:48.304 { 01:25:48.304 "subsystem": "bdev", 01:25:48.304 "config": [ 01:25:48.304 { 01:25:48.304 "params": { 01:25:48.304 "io_mechanism": "io_uring", 01:25:48.304 "conserve_cpu": true, 01:25:48.304 "filename": "/dev/nvme0n1", 01:25:48.304 "name": "xnvme_bdev" 01:25:48.304 }, 01:25:48.304 "method": "bdev_xnvme_create" 01:25:48.304 }, 01:25:48.304 { 01:25:48.304 "method": "bdev_wait_for_examine" 01:25:48.304 } 01:25:48.304 ] 01:25:48.304 } 01:25:48.304 ] 01:25:48.304 } 01:25:48.304 [2024-12-09 05:20:30.266902] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:25:48.304 [2024-12-09 05:20:30.267219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71980 ] 01:25:48.304 [2024-12-09 05:20:30.455658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:48.304 [2024-12-09 05:20:30.600720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:48.563 Running I/O for 5 seconds... 01:25:50.874 29824.00 IOPS, 116.50 MiB/s [2024-12-09T05:20:34.265Z] 28544.00 IOPS, 111.50 MiB/s [2024-12-09T05:20:35.200Z] 28853.33 IOPS, 112.71 MiB/s [2024-12-09T05:20:36.132Z] 27536.00 IOPS, 107.56 MiB/s 01:25:53.676 Latency(us) 01:25:53.676 [2024-12-09T05:20:36.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:53.676 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:25:53.676 xnvme_bdev : 5.01 26735.04 104.43 0.00 0.00 2386.53 1046.21 8632.85 01:25:53.676 [2024-12-09T05:20:36.132Z] =================================================================================================================== 01:25:53.676 [2024-12-09T05:20:36.132Z] Total : 26735.04 104.43 0.00 0.00 2386.53 1046.21 8632.85 01:25:55.056 05:20:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:25:55.056 05:20:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:25:55.056 05:20:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:25:55.056 05:20:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:25:55.056 05:20:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:25:55.056 { 01:25:55.056 "subsystems": [ 01:25:55.056 { 01:25:55.056 "subsystem": "bdev", 01:25:55.056 "config": [ 01:25:55.056 { 01:25:55.056 "params": { 01:25:55.056 "io_mechanism": "io_uring", 01:25:55.056 "conserve_cpu": true, 01:25:55.056 "filename": "/dev/nvme0n1", 01:25:55.056 "name": "xnvme_bdev" 01:25:55.056 }, 01:25:55.056 "method": "bdev_xnvme_create" 01:25:55.056 }, 01:25:55.056 { 01:25:55.056 "method": "bdev_wait_for_examine" 01:25:55.056 } 01:25:55.056 ] 01:25:55.056 } 01:25:55.056 ] 01:25:55.056 } 01:25:55.056 [2024-12-09 05:20:37.429761] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
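This second io_uring bdevperf pass repeats randread with conserve_cpu flipped to true, and a randwrite run with the same setting is initializing below. The flag is meant to trade peak throughput for less completion-polling CPU burn, which is consistent with the paired numbers in this log: randread drops from ~37.5k IOPS at ~1700 us average (conserve_cpu=false) to ~26.7k IOPS at ~2387 us here, while sys time for the pass falls from 7.041 s to 5.544 s. The only delta in the generated config is the one boolean; a sketch, with a hypothetical helper mirroring what gen_conf emits:

    # gen_xnvme_json is a hypothetical helper; only conserve_cpu varies between passes.
    gen_xnvme_json() {
        printf '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring","conserve_cpu":%s,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}' "$1"
    }
    gen_xnvme_json true | /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/stdin -q 64 -w randread -t 5 -T xnvme_bdev -o 4096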
01:25:55.056 [2024-12-09 05:20:37.429881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72065 ] 01:25:55.315 [2024-12-09 05:20:37.613061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:55.315 [2024-12-09 05:20:37.757594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:55.883 Running I/O for 5 seconds... 01:25:57.755 28544.00 IOPS, 111.50 MiB/s [2024-12-09T05:20:41.588Z] 30400.00 IOPS, 118.75 MiB/s [2024-12-09T05:20:42.177Z] 28074.67 IOPS, 109.67 MiB/s [2024-12-09T05:20:43.552Z] 26952.00 IOPS, 105.28 MiB/s [2024-12-09T05:20:43.552Z] 26412.80 IOPS, 103.17 MiB/s 01:26:01.096 Latency(us) 01:26:01.096 [2024-12-09T05:20:43.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:01.096 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:26:01.097 xnvme_bdev : 5.01 26367.03 103.00 0.00 0.00 2419.42 947.51 8422.30 01:26:01.097 [2024-12-09T05:20:43.553Z] =================================================================================================================== 01:26:01.097 [2024-12-09T05:20:43.553Z] Total : 26367.03 103.00 0.00 0.00 2419.42 947.51 8422.30 01:26:02.032 ************************************ 01:26:02.032 END TEST xnvme_bdevperf 01:26:02.032 ************************************ 01:26:02.032 01:26:02.032 real 0m14.295s 01:26:02.032 user 0m8.219s 01:26:02.032 sys 0m5.544s 01:26:02.032 05:20:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:02.032 05:20:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:26:02.290 05:20:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:26:02.291 05:20:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:02.291 05:20:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:02.291 05:20:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:26:02.291 ************************************ 01:26:02.291 START TEST xnvme_fio_plugin 01:26:02.291 ************************************ 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:26:02.291 { 01:26:02.291 "subsystems": [ 01:26:02.291 { 01:26:02.291 "subsystem": "bdev", 01:26:02.291 "config": [ 01:26:02.291 { 01:26:02.291 "params": { 01:26:02.291 "io_mechanism": "io_uring", 01:26:02.291 "conserve_cpu": true, 01:26:02.291 "filename": "/dev/nvme0n1", 01:26:02.291 "name": "xnvme_bdev" 01:26:02.291 }, 01:26:02.291 "method": "bdev_xnvme_create" 01:26:02.291 }, 01:26:02.291 { 01:26:02.291 "method": "bdev_wait_for_examine" 01:26:02.291 } 01:26:02.291 ] 01:26:02.291 } 01:26:02.291 ] 01:26:02.291 } 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:26:02.291 05:20:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:02.549 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:26:02.549 fio-3.35 01:26:02.549 Starting 1 thread 01:26:09.163 01:26:09.163 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72191: Mon Dec 9 05:20:50 2024 01:26:09.163 read: IOPS=29.7k, BW=116MiB/s (122MB/s)(580MiB/5002msec) 01:26:09.163 slat (nsec): min=2332, max=70577, avg=6030.22, stdev=2652.15 01:26:09.163 clat (usec): min=1110, max=3700, avg=1916.62, stdev=415.94 01:26:09.163 lat (usec): min=1114, max=3720, avg=1922.65, stdev=417.58 01:26:09.163 clat percentiles (usec): 01:26:09.163 | 1.00th=[ 1270], 5.00th=[ 1369], 10.00th=[ 1434], 20.00th=[ 1532], 01:26:09.163 | 30.00th=[ 1614], 40.00th=[ 1713], 50.00th=[ 1827], 60.00th=[ 1991], 01:26:09.163 | 70.00th=[ 2180], 80.00th=[ 2343], 90.00th=[ 2507], 95.00th=[ 2638], 01:26:09.163 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 3425], 99.95th=[ 3556], 01:26:09.163 | 99.99th=[ 3654] 01:26:09.163 bw ( KiB/s): 
min=99840, max=147672, per=100.00%, avg=121026.67, stdev=16712.46, samples=9 01:26:09.163 iops : min=24960, max=36918, avg=30256.67, stdev=4178.11, samples=9 01:26:09.163 lat (msec) : 2=60.44%, 4=39.56% 01:26:09.163 cpu : usr=46.51%, sys=49.55%, ctx=13, majf=0, minf=762 01:26:09.163 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:26:09.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:26:09.163 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 01:26:09.163 issued rwts: total=148416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:26:09.163 latency : target=0, window=0, percentile=100.00%, depth=64 01:26:09.163 01:26:09.163 Run status group 0 (all jobs): 01:26:09.163 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=580MiB (608MB), run=5002-5002msec 01:26:10.098 ----------------------------------------------------- 01:26:10.098 Suppressions used: 01:26:10.098 count bytes template 01:26:10.098 1 11 /usr/src/fio/parse.c 01:26:10.098 1 8 libtcmalloc_minimal.so 01:26:10.098 1 904 libcrypto.so 01:26:10.098 ----------------------------------------------------- 01:26:10.098 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:26:10.098 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:26:10.099 05:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:10.099 { 01:26:10.099 "subsystems": [ 01:26:10.099 { 01:26:10.099 "subsystem": "bdev", 01:26:10.099 "config": [ 01:26:10.099 { 01:26:10.099 "params": { 01:26:10.099 "io_mechanism": "io_uring", 01:26:10.099 "conserve_cpu": true, 01:26:10.099 "filename": "/dev/nvme0n1", 01:26:10.099 "name": "xnvme_bdev" 01:26:10.099 }, 01:26:10.099 "method": "bdev_xnvme_create" 01:26:10.099 }, 01:26:10.099 { 01:26:10.099 "method": "bdev_wait_for_examine" 01:26:10.099 } 01:26:10.099 ] 01:26:10.099 } 01:26:10.099 ] 01:26:10.099 } 01:26:10.356 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:26:10.356 fio-3.35 01:26:10.356 Starting 1 thread 01:26:16.916 01:26:16.916 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72287: Mon Dec 9 05:20:58 2024 01:26:16.916 write: IOPS=26.3k, BW=103MiB/s (108MB/s)(515MiB/5002msec); 0 zone resets 01:26:16.916 slat (usec): min=2, max=143, avg= 7.17, stdev= 3.27 01:26:16.916 clat (usec): min=1110, max=3456, avg=2145.70, stdev=423.99 01:26:16.916 lat (usec): min=1114, max=3518, avg=2152.86, stdev=425.75 01:26:16.916 clat percentiles (usec): 01:26:16.916 | 1.00th=[ 1270], 5.00th=[ 1418], 10.00th=[ 1516], 20.00th=[ 1696], 01:26:16.916 | 30.00th=[ 1909], 40.00th=[ 2089], 50.00th=[ 2212], 60.00th=[ 2311], 01:26:16.916 | 70.00th=[ 2409], 80.00th=[ 2540], 90.00th=[ 2671], 95.00th=[ 2769], 01:26:16.916 | 99.00th=[ 2900], 99.50th=[ 2933], 99.90th=[ 3032], 99.95th=[ 3097], 01:26:16.916 | 99.99th=[ 3326] 01:26:16.916 bw ( KiB/s): min=89600, max=130560, per=100.00%, avg=106609.78, stdev=13523.92, samples=9 01:26:16.916 iops : min=22400, max=32640, avg=26652.44, stdev=3380.98, samples=9 01:26:16.916 lat (msec) : 2=34.89%, 4=65.11% 01:26:16.916 cpu : usr=47.45%, sys=48.39%, ctx=11, majf=0, minf=763 01:26:16.916 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:26:16.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:26:16.916 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 01:26:16.916 issued rwts: total=0,131776,0,0 short=0,0,0,0 dropped=0,0,0,0 01:26:16.916 latency : target=0, window=0, percentile=100.00%, depth=64 01:26:16.916 01:26:16.916 Run status group 0 (all jobs): 01:26:16.916 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=515MiB (540MB), run=5002-5002msec 01:26:17.855 ----------------------------------------------------- 01:26:17.855 Suppressions used: 01:26:17.855 count bytes template 01:26:17.855 1 11 /usr/src/fio/parse.c 01:26:17.855 1 8 libtcmalloc_minimal.so 01:26:17.855 1 904 libcrypto.so 01:26:17.855 ----------------------------------------------------- 01:26:17.855 01:26:17.855 01:26:17.855 real 0m15.468s 01:26:17.855 user 0m9.075s 01:26:17.855 sys 0m5.725s 01:26:17.855 ************************************ 01:26:17.855 END TEST xnvme_fio_plugin 01:26:17.855 ************************************ 01:26:17.855 05:20:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:17.855 05:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 01:26:17.855 05:21:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:26:17.855 05:21:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:17.855 05:21:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:17.855 05:21:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:26:17.855 ************************************ 01:26:17.855 START TEST xnvme_rpc 01:26:17.855 ************************************ 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72379 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:26:17.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72379 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72379 ']' 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:17.855 05:21:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:17.855 [2024-12-09 05:21:00.197127] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
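From this point the suite switches io_mechanism to io_uring_cmd and the filename to /dev/ng0n1. The ng node is the NVMe generic character device, so this backend submits NVMe commands via io_uring passthrough instead of going through the block layer, and conserve_cpu starts from false again (note the empty '' where the -c flag would otherwise go in the create call below). A sketch of the equivalent RPC calls, rpc.py path assumed:

    # Sketch: io_uring_cmd targets the char device /dev/ng0n1, not /dev/nvme0n1.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # path assumed
    "$rpc" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    "$rpc" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> io_uring_cmd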
01:26:17.855 [2024-12-09 05:21:00.197289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72379 ] 01:26:18.114 [2024-12-09 05:21:00.376168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:18.114 [2024-12-09 05:21:00.496929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:19.050 xnvme_bdev 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:26:19.050 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:26:19.051 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:26:19.051 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.051 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72379 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72379 ']' 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72379 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72379 01:26:19.310 killing process with pid 72379 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72379' 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72379 01:26:19.310 05:21:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72379 01:26:21.878 01:26:21.878 real 0m4.157s 01:26:21.878 user 0m4.221s 01:26:21.878 sys 0m0.567s 01:26:21.878 05:21:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:21.878 ************************************ 01:26:21.878 END TEST xnvme_rpc 01:26:21.878 ************************************ 01:26:21.878 05:21:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:21.878 05:21:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:26:21.878 05:21:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:21.878 05:21:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:21.878 05:21:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:26:21.878 ************************************ 01:26:21.878 START TEST xnvme_bdevperf 01:26:21.878 ************************************ 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:26:21.878 05:21:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:26:22.136 { 01:26:22.136 "subsystems": [ 01:26:22.136 { 01:26:22.136 "subsystem": "bdev", 01:26:22.136 "config": [ 01:26:22.136 { 01:26:22.136 "params": { 01:26:22.136 "io_mechanism": "io_uring_cmd", 01:26:22.136 "conserve_cpu": false, 01:26:22.136 "filename": "/dev/ng0n1", 01:26:22.136 "name": "xnvme_bdev" 01:26:22.136 }, 01:26:22.136 "method": "bdev_xnvme_create" 01:26:22.136 }, 01:26:22.136 { 01:26:22.136 "method": "bdev_wait_for_examine" 01:26:22.136 } 01:26:22.136 ] 01:26:22.136 } 01:26:22.136 ] 01:26:22.136 } 01:26:22.136 [2024-12-09 05:21:04.422140] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:26:22.136 [2024-12-09 05:21:04.422294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72460 ] 01:26:22.394 [2024-12-09 05:21:04.610690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:22.394 [2024-12-09 05:21:04.760020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:22.962 Running I/O for 5 seconds... 01:26:24.829 27904.00 IOPS, 109.00 MiB/s [2024-12-09T05:21:08.220Z] 28480.00 IOPS, 111.25 MiB/s [2024-12-09T05:21:09.600Z] 29687.00 IOPS, 115.96 MiB/s [2024-12-09T05:21:10.535Z] 29717.25 IOPS, 116.08 MiB/s 01:26:28.079 Latency(us) 01:26:28.079 [2024-12-09T05:21:10.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:28.079 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:26:28.079 xnvme_bdev : 5.00 29737.85 116.16 0.00 0.00 2145.54 776.43 16318.20 01:26:28.079 [2024-12-09T05:21:10.535Z] =================================================================================================================== 01:26:28.079 [2024-12-09T05:21:10.535Z] Total : 29737.85 116.16 0.00 0.00 2145.54 776.43 16318.20 01:26:29.454 05:21:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:29.454 05:21:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:26:29.454 05:21:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:26:29.454 05:21:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:26:29.454 05:21:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:26:29.454 { 01:26:29.454 "subsystems": [ 01:26:29.454 { 01:26:29.454 "subsystem": "bdev", 01:26:29.454 "config": [ 01:26:29.454 { 01:26:29.454 "params": { 01:26:29.454 "io_mechanism": "io_uring_cmd", 01:26:29.454 "conserve_cpu": false, 01:26:29.454 "filename": "/dev/ng0n1", 01:26:29.454 "name": "xnvme_bdev" 01:26:29.454 }, 01:26:29.454 "method": "bdev_xnvme_create" 01:26:29.454 }, 01:26:29.454 { 01:26:29.454 "method": "bdev_wait_for_examine" 01:26:29.454 } 01:26:29.454 ] 01:26:29.454 } 01:26:29.454 ] 01:26:29.454 } 01:26:29.454 [2024-12-09 05:21:11.623061] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
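For io_uring_cmd the bdevperf loop covers four workloads rather than the two run for io_uring: after the randread pass above and the randwrite run initializing below, unmap and write_zeroes passes follow, the passthrough path being able to issue those NVMe commands directly. The driver is the nameref loop visible in the xtrace (xnvme.sh@13 and @15); a sketch, with the array contents inferred from the -w values in this log:

    # Sketch of the workload loop behind these runs; the io_uring_cmd array
    # contents are inferred from the -w values that appear in this log.
    io_uring_cmd=(randread randwrite unmap write_zeroes)
    conf_json=/path/to/xnvme.json      # assumed: a gen_conf-style JSON like the ones above
    xnvme_bdevperf_sketch() {
        local -n io_pattern_ref=$1     # nameref, as in xnvme.sh@13
        local io_pattern
        for io_pattern in "${io_pattern_ref[@]}"; do
            /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
                --json "$conf_json" -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
        done
    }
    xnvme_bdevperf_sketch io_uring_cmd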
01:26:29.454 [2024-12-09 05:21:11.623200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72545 ] 01:26:29.454 [2024-12-09 05:21:11.812359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:29.713 [2024-12-09 05:21:11.953787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:29.972 Running I/O for 5 seconds... 01:26:31.912 32192.00 IOPS, 125.75 MiB/s [2024-12-09T05:21:15.750Z] 32320.00 IOPS, 126.25 MiB/s [2024-12-09T05:21:16.685Z] 30954.67 IOPS, 120.92 MiB/s [2024-12-09T05:21:17.620Z] 31072.00 IOPS, 121.38 MiB/s [2024-12-09T05:21:17.620Z] 32844.40 IOPS, 128.30 MiB/s 01:26:35.164 Latency(us) 01:26:35.164 [2024-12-09T05:21:17.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:35.164 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:26:35.164 xnvme_bdev : 5.00 32834.94 128.26 0.00 0.00 1943.19 1052.79 5632.41 01:26:35.164 [2024-12-09T05:21:17.620Z] =================================================================================================================== 01:26:35.164 [2024-12-09T05:21:17.620Z] Total : 32834.94 128.26 0.00 0.00 1943.19 1052.79 5632.41 01:26:36.537 05:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:36.537 05:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 01:26:36.537 05:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:26:36.537 05:21:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:26:36.537 05:21:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:26:36.537 { 01:26:36.537 "subsystems": [ 01:26:36.537 { 01:26:36.537 "subsystem": "bdev", 01:26:36.537 "config": [ 01:26:36.537 { 01:26:36.537 "params": { 01:26:36.537 "io_mechanism": "io_uring_cmd", 01:26:36.537 "conserve_cpu": false, 01:26:36.537 "filename": "/dev/ng0n1", 01:26:36.537 "name": "xnvme_bdev" 01:26:36.537 }, 01:26:36.537 "method": "bdev_xnvme_create" 01:26:36.537 }, 01:26:36.537 { 01:26:36.537 "method": "bdev_wait_for_examine" 01:26:36.537 } 01:26:36.537 ] 01:26:36.537 } 01:26:36.537 ] 01:26:36.537 } 01:26:36.537 [2024-12-09 05:21:18.756647] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:26:36.537 [2024-12-09 05:21:18.757039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72627 ] 01:26:36.537 [2024-12-09 05:21:18.944570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:36.795 [2024-12-09 05:21:19.086734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:37.053 Running I/O for 5 seconds... 
01:26:39.362 63744.00 IOPS, 249.00 MiB/s [2024-12-09T05:21:22.753Z] 67104.00 IOPS, 262.12 MiB/s [2024-12-09T05:21:23.685Z] 67498.67 IOPS, 263.67 MiB/s [2024-12-09T05:21:24.618Z] 68144.00 IOPS, 266.19 MiB/s 01:26:42.162 Latency(us) 01:26:42.162 [2024-12-09T05:21:24.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:42.162 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 01:26:42.162 xnvme_bdev : 5.00 68506.53 267.60 0.00 0.00 931.38 556.00 3579.48 01:26:42.162 [2024-12-09T05:21:24.618Z] =================================================================================================================== 01:26:42.162 [2024-12-09T05:21:24.618Z] Total : 68506.53 267.60 0.00 0.00 931.38 556.00 3579.48 01:26:43.535 05:21:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:43.535 05:21:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 01:26:43.535 05:21:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:26:43.535 05:21:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:26:43.535 05:21:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:26:43.535 { 01:26:43.535 "subsystems": [ 01:26:43.535 { 01:26:43.535 "subsystem": "bdev", 01:26:43.535 "config": [ 01:26:43.535 { 01:26:43.535 "params": { 01:26:43.535 "io_mechanism": "io_uring_cmd", 01:26:43.535 "conserve_cpu": false, 01:26:43.535 "filename": "/dev/ng0n1", 01:26:43.535 "name": "xnvme_bdev" 01:26:43.535 }, 01:26:43.535 "method": "bdev_xnvme_create" 01:26:43.535 }, 01:26:43.535 { 01:26:43.535 "method": "bdev_wait_for_examine" 01:26:43.535 } 01:26:43.535 ] 01:26:43.535 } 01:26:43.535 ] 01:26:43.535 } 01:26:43.535 [2024-12-09 05:21:25.909027] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:26:43.535 [2024-12-09 05:21:25.909520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72709 ] 01:26:43.808 [2024-12-09 05:21:26.099927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:43.808 [2024-12-09 05:21:26.244394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:44.373 Running I/O for 5 seconds... 
01:26:46.318 17595.00 IOPS, 68.73 MiB/s [2024-12-09T05:21:29.706Z] 32686.00 IOPS, 127.68 MiB/s [2024-12-09T05:21:30.679Z] 42746.00 IOPS, 166.98 MiB/s [2024-12-09T05:21:32.051Z] 47540.00 IOPS, 185.70 MiB/s [2024-12-09T05:21:32.051Z] 50537.00 IOPS, 197.41 MiB/s 01:26:49.595 Latency(us) 01:26:49.595 [2024-12-09T05:21:32.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:49.595 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 01:26:49.595 xnvme_bdev : 5.00 50518.15 197.34 0.00 0.00 1263.88 70.32 32004.73 01:26:49.595 [2024-12-09T05:21:32.051Z] =================================================================================================================== 01:26:49.595 [2024-12-09T05:21:32.051Z] Total : 50518.15 197.34 0.00 0.00 1263.88 70.32 32004.73 01:26:50.533 01:26:50.533 real 0m28.596s 01:26:50.533 user 0m14.553s 01:26:50.533 sys 0m13.618s 01:26:50.533 ************************************ 01:26:50.533 END TEST xnvme_bdevperf 01:26:50.533 ************************************ 01:26:50.533 05:21:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:50.533 05:21:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:26:50.533 05:21:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:26:50.533 05:21:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:50.533 05:21:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:50.533 05:21:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:26:50.533 ************************************ 01:26:50.533 START TEST xnvme_fio_plugin 01:26:50.533 ************************************ 01:26:50.533 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:26:50.533 05:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:26:50.533 05:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 01:26:50.533 05:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:50.533 05:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:26:50.533 05:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:26:50.792 05:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:26:50.792 05:21:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:26:50.792 05:21:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:26:50.792 05:21:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:26:50.793 05:21:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:26:50.793 05:21:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:50.793 { 01:26:50.793 "subsystems": [ 01:26:50.793 { 01:26:50.793 "subsystem": "bdev", 01:26:50.793 "config": [ 01:26:50.793 { 01:26:50.793 "params": { 01:26:50.793 "io_mechanism": "io_uring_cmd", 01:26:50.793 "conserve_cpu": false, 01:26:50.793 "filename": "/dev/ng0n1", 01:26:50.793 "name": "xnvme_bdev" 01:26:50.793 }, 01:26:50.793 "method": "bdev_xnvme_create" 01:26:50.793 }, 01:26:50.793 { 01:26:50.793 "method": "bdev_wait_for_examine" 01:26:50.793 } 01:26:50.793 ] 01:26:50.793 } 01:26:50.793 ] 01:26:50.793 } 01:26:50.793 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:26:50.793 fio-3.35 01:26:50.793 Starting 1 thread 01:26:57.359 01:26:57.359 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72833: Mon Dec 9 05:21:39 2024 01:26:57.359 read: IOPS=29.7k, BW=116MiB/s (122MB/s)(580MiB/5002msec) 01:26:57.359 slat (usec): min=2, max=101, avg= 6.41, stdev= 2.35 01:26:57.359 clat (usec): min=1027, max=10944, avg=1902.22, stdev=361.47 01:26:57.359 lat (usec): min=1030, max=10951, avg=1908.63, stdev=362.51 01:26:57.359 clat percentiles (usec): 01:26:57.359 | 1.00th=[ 1221], 5.00th=[ 1385], 10.00th=[ 1516], 20.00th=[ 1647], 01:26:57.359 | 30.00th=[ 1729], 40.00th=[ 1811], 50.00th=[ 1876], 60.00th=[ 1958], 01:26:57.359 | 70.00th=[ 2040], 80.00th=[ 2147], 90.00th=[ 2343], 95.00th=[ 2474], 01:26:57.359 | 99.00th=[ 2606], 99.50th=[ 2671], 99.90th=[ 2835], 99.95th=[ 3359], 01:26:57.359 | 99.99th=[10814] 01:26:57.359 bw ( KiB/s): min=98304, max=150528, per=100.00%, avg=121116.44, stdev=14117.45, samples=9 01:26:57.359 iops : min=24576, max=37632, avg=30279.11, stdev=3529.36, samples=9 01:26:57.359 lat (msec) : 2=65.71%, 4=34.25%, 20=0.04% 01:26:57.359 cpu : usr=35.01%, sys=63.93%, ctx=7, majf=0, minf=762 01:26:57.359 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:26:57.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:26:57.359 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 01:26:57.359 issued 
rwts: total=148480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:26:57.359 latency : target=0, window=0, percentile=100.00%, depth=64 01:26:57.359 01:26:57.359 Run status group 0 (all jobs): 01:26:57.359 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=580MiB (608MB), run=5002-5002msec 01:26:58.295 ----------------------------------------------------- 01:26:58.295 Suppressions used: 01:26:58.295 count bytes template 01:26:58.295 1 11 /usr/src/fio/parse.c 01:26:58.295 1 8 libtcmalloc_minimal.so 01:26:58.295 1 904 libcrypto.so 01:26:58.295 ----------------------------------------------------- 01:26:58.295 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:26:58.295 05:21:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 01:26:58.295 { 01:26:58.295 "subsystems": [ 01:26:58.295 { 01:26:58.295 "subsystem": "bdev", 01:26:58.295 "config": [ 01:26:58.295 { 01:26:58.295 "params": { 01:26:58.295 "io_mechanism": "io_uring_cmd", 01:26:58.295 "conserve_cpu": false, 01:26:58.295 "filename": "/dev/ng0n1", 01:26:58.295 "name": "xnvme_bdev" 01:26:58.295 }, 01:26:58.295 "method": "bdev_xnvme_create" 01:26:58.295 }, 01:26:58.295 { 01:26:58.295 "method": "bdev_wait_for_examine" 01:26:58.295 } 01:26:58.295 ] 01:26:58.295 } 01:26:58.295 ] 01:26:58.295 } 01:26:58.553 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:26:58.553 fio-3.35 01:26:58.553 Starting 1 thread 01:27:05.117 01:27:05.117 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72929: Mon Dec 9 05:21:46 2024 01:27:05.117 write: IOPS=32.6k, BW=127MiB/s (134MB/s)(637MiB/5001msec); 0 zone resets 01:27:05.117 slat (nsec): min=2200, max=72324, avg=5744.90, stdev=3182.11 01:27:05.117 clat (usec): min=564, max=3677, avg=1736.99, stdev=518.23 01:27:05.117 lat (usec): min=568, max=3705, avg=1742.74, stdev=520.31 01:27:05.117 clat percentiles (usec): 01:27:05.117 | 1.00th=[ 938], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1237], 01:27:05.117 | 30.00th=[ 1369], 40.00th=[ 1483], 50.00th=[ 1614], 60.00th=[ 1795], 01:27:05.117 | 70.00th=[ 2089], 80.00th=[ 2311], 90.00th=[ 2507], 95.00th=[ 2606], 01:27:05.117 | 99.00th=[ 2802], 99.50th=[ 2868], 99.90th=[ 3064], 99.95th=[ 3261], 01:27:05.117 | 99.99th=[ 3556] 01:27:05.117 bw ( KiB/s): min=92672, max=154536, per=100.00%, avg=134548.00, stdev=22986.17, samples=9 01:27:05.117 iops : min=23168, max=38630, avg=33637.00, stdev=5746.31, samples=9 01:27:05.117 lat (usec) : 750=0.02%, 1000=3.00% 01:27:05.117 lat (msec) : 2=63.86%, 4=33.12% 01:27:05.117 cpu : usr=38.14%, sys=60.72%, ctx=9, majf=0, minf=763 01:27:05.117 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:27:05.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:27:05.117 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 01:27:05.117 issued rwts: total=0,163040,0,0 short=0,0,0,0 dropped=0,0,0,0 01:27:05.117 latency : target=0, window=0, percentile=100.00%, depth=64 01:27:05.117 01:27:05.117 Run status group 0 (all jobs): 01:27:05.117 WRITE: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=637MiB (668MB), run=5001-5001msec 01:27:06.053 ----------------------------------------------------- 01:27:06.053 Suppressions used: 01:27:06.053 count bytes template 01:27:06.053 1 11 /usr/src/fio/parse.c 01:27:06.053 1 8 libtcmalloc_minimal.so 01:27:06.053 1 904 libcrypto.so 01:27:06.053 ----------------------------------------------------- 01:27:06.053 01:27:06.053 01:27:06.053 real 0m15.281s 01:27:06.053 user 0m7.837s 01:27:06.053 sys 0m7.081s 01:27:06.053 ************************************ 01:27:06.053 END TEST xnvme_fio_plugin 01:27:06.053 ************************************ 01:27:06.053 05:21:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:06.053 05:21:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:27:06.053 05:21:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:27:06.053 05:21:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 01:27:06.053 05:21:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 01:27:06.053 05:21:48 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:27:06.053 05:21:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:06.053 05:21:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:06.053 05:21:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:06.053 ************************************ 01:27:06.053 START TEST xnvme_rpc 01:27:06.053 ************************************ 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73023 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73023 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73023 ']' 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:06.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:06.053 05:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:06.053 [2024-12-09 05:21:48.457332] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:27:06.053 [2024-12-09 05:21:48.457477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73023 ] 01:27:06.312 [2024-12-09 05:21:48.645172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:06.570 [2024-12-09 05:21:48.775352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:07.506 xnvme_bdev 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 01:27:07.506 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:27:07.507 
05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73023 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73023 ']' 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73023 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:27:07.507 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:07.764 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73023 01:27:07.764 killing process with pid 73023 01:27:07.764 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:07.764 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:07.764 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73023' 01:27:07.764 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73023 01:27:07.764 05:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73023 01:27:10.299 01:27:10.299 real 0m4.305s 01:27:10.299 user 0m4.162s 01:27:10.299 sys 0m0.724s 01:27:10.299 05:21:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:10.299 05:21:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:10.299 ************************************ 01:27:10.299 END TEST xnvme_rpc 01:27:10.299 ************************************ 01:27:10.299 05:21:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:27:10.299 05:21:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:10.299 05:21:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:10.299 05:21:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:10.299 ************************************ 01:27:10.299 START TEST xnvme_bdevperf 01:27:10.299 ************************************ 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:27:10.299 05:21:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:27:10.558 { 01:27:10.558 "subsystems": [ 01:27:10.558 { 01:27:10.558 "subsystem": "bdev", 01:27:10.558 "config": [ 01:27:10.558 { 01:27:10.558 "params": { 01:27:10.558 "io_mechanism": "io_uring_cmd", 01:27:10.558 "conserve_cpu": true, 01:27:10.558 "filename": "/dev/ng0n1", 01:27:10.558 "name": "xnvme_bdev" 01:27:10.558 }, 01:27:10.558 "method": "bdev_xnvme_create" 01:27:10.558 }, 01:27:10.558 { 01:27:10.558 "method": "bdev_wait_for_examine" 01:27:10.558 } 01:27:10.558 ] 01:27:10.558 } 01:27:10.558 ] 01:27:10.558 } 01:27:10.558 [2024-12-09 05:21:52.821973] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:10.558 [2024-12-09 05:21:52.822290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73104 ] 01:27:10.558 [2024-12-09 05:21:53.008483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:10.817 [2024-12-09 05:21:53.138451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:11.383 Running I/O for 5 seconds... 01:27:13.251 31680.00 IOPS, 123.75 MiB/s [2024-12-09T05:21:56.642Z] 31648.00 IOPS, 123.62 MiB/s [2024-12-09T05:21:57.579Z] 31914.67 IOPS, 124.67 MiB/s [2024-12-09T05:21:58.957Z] 29744.00 IOPS, 116.19 MiB/s [2024-12-09T05:21:58.957Z] 28787.20 IOPS, 112.45 MiB/s 01:27:16.501 Latency(us) 01:27:16.501 [2024-12-09T05:21:58.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:16.501 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:27:16.501 xnvme_bdev : 5.01 28729.87 112.23 0.00 0.00 2220.86 927.77 10106.76 01:27:16.501 [2024-12-09T05:21:58.957Z] =================================================================================================================== 01:27:16.501 [2024-12-09T05:21:58.957Z] Total : 28729.87 112.23 0.00 0.00 2220.86 927.77 10106.76 01:27:17.436 05:21:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:27:17.436 05:21:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:27:17.436 05:21:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:27:17.436 05:21:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:27:17.436 05:21:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:27:17.695 { 01:27:17.695 "subsystems": [ 01:27:17.695 { 01:27:17.695 "subsystem": "bdev", 01:27:17.695 "config": [ 01:27:17.695 { 01:27:17.695 "params": { 01:27:17.695 "io_mechanism": "io_uring_cmd", 01:27:17.695 "conserve_cpu": true, 01:27:17.695 "filename": "/dev/ng0n1", 01:27:17.695 "name": "xnvme_bdev" 01:27:17.695 }, 01:27:17.695 "method": "bdev_xnvme_create" 01:27:17.695 }, 01:27:17.695 { 01:27:17.695 "method": "bdev_wait_for_examine" 01:27:17.695 } 01:27:17.695 ] 01:27:17.695 } 01:27:17.695 ] 01:27:17.695 } 01:27:17.695 [2024-12-09 05:21:59.933295] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:27:17.695 [2024-12-09 05:21:59.933428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73189 ] 01:27:17.695 [2024-12-09 05:22:00.118372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:17.954 [2024-12-09 05:22:00.248697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:18.212 Running I/O for 5 seconds... 01:27:20.519 36736.00 IOPS, 143.50 MiB/s [2024-12-09T05:22:03.907Z] 33184.00 IOPS, 129.62 MiB/s [2024-12-09T05:22:04.840Z] 32832.00 IOPS, 128.25 MiB/s [2024-12-09T05:22:05.775Z] 32736.00 IOPS, 127.88 MiB/s 01:27:23.319 Latency(us) 01:27:23.319 [2024-12-09T05:22:05.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:23.319 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:27:23.319 xnvme_bdev : 5.01 30813.74 120.37 0.00 0.00 2070.30 664.57 7948.54 01:27:23.319 [2024-12-09T05:22:05.775Z] =================================================================================================================== 01:27:23.319 [2024-12-09T05:22:05.775Z] Total : 30813.74 120.37 0.00 0.00 2070.30 664.57 7948.54 01:27:24.695 05:22:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:27:24.695 05:22:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:27:24.695 05:22:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 01:27:24.695 05:22:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:27:24.695 05:22:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:27:24.695 { 01:27:24.695 "subsystems": [ 01:27:24.695 { 01:27:24.695 "subsystem": "bdev", 01:27:24.695 "config": [ 01:27:24.695 { 01:27:24.695 "params": { 01:27:24.695 "io_mechanism": "io_uring_cmd", 01:27:24.695 "conserve_cpu": true, 01:27:24.695 "filename": "/dev/ng0n1", 01:27:24.695 "name": "xnvme_bdev" 01:27:24.695 }, 01:27:24.695 "method": "bdev_xnvme_create" 01:27:24.695 }, 01:27:24.695 { 01:27:24.695 "method": "bdev_wait_for_examine" 01:27:24.695 } 01:27:24.695 ] 01:27:24.695 } 01:27:24.695 ] 01:27:24.695 } 01:27:24.695 [2024-12-09 05:22:07.017033] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:24.695 [2024-12-09 05:22:07.017156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73269 ] 01:27:24.954 [2024-12-09 05:22:07.205218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:24.954 [2024-12-09 05:22:07.335357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:25.531 Running I/O for 5 seconds... 
01:27:27.446 71296.00 IOPS, 278.50 MiB/s [2024-12-09T05:22:10.836Z] 70240.00 IOPS, 274.38 MiB/s [2024-12-09T05:22:11.770Z] 70250.67 IOPS, 274.42 MiB/s [2024-12-09T05:22:13.145Z] 70480.00 IOPS, 275.31 MiB/s 01:27:30.689 Latency(us) 01:27:30.689 [2024-12-09T05:22:13.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:30.689 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 01:27:30.689 xnvme_bdev : 5.00 70395.00 274.98 0.00 0.00 906.40 638.25 2895.16 01:27:30.689 [2024-12-09T05:22:13.145Z] =================================================================================================================== 01:27:30.689 [2024-12-09T05:22:13.145Z] Total : 70395.00 274.98 0.00 0.00 906.40 638.25 2895.16 01:27:31.624 05:22:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:27:31.624 05:22:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 01:27:31.624 05:22:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:27:31.624 05:22:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:27:31.624 05:22:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:27:31.624 { 01:27:31.624 "subsystems": [ 01:27:31.624 { 01:27:31.624 "subsystem": "bdev", 01:27:31.624 "config": [ 01:27:31.624 { 01:27:31.624 "params": { 01:27:31.624 "io_mechanism": "io_uring_cmd", 01:27:31.624 "conserve_cpu": true, 01:27:31.624 "filename": "/dev/ng0n1", 01:27:31.624 "name": "xnvme_bdev" 01:27:31.624 }, 01:27:31.624 "method": "bdev_xnvme_create" 01:27:31.624 }, 01:27:31.624 { 01:27:31.624 "method": "bdev_wait_for_examine" 01:27:31.624 } 01:27:31.624 ] 01:27:31.624 } 01:27:31.624 ] 01:27:31.624 } 01:27:31.624 [2024-12-09 05:22:14.074510] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:31.624 [2024-12-09 05:22:14.074871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73343 ] 01:27:31.882 [2024-12-09 05:22:14.261114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:32.140 [2024-12-09 05:22:14.391522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:32.397 Running I/O for 5 seconds... 
01:27:34.696 59940.00 IOPS, 234.14 MiB/s [2024-12-09T05:22:18.079Z] 57839.00 IOPS, 225.93 MiB/s [2024-12-09T05:22:19.013Z] 57491.33 IOPS, 224.58 MiB/s [2024-12-09T05:22:19.944Z] 57115.25 IOPS, 223.11 MiB/s [2024-12-09T05:22:19.944Z] 57626.00 IOPS, 225.10 MiB/s 01:27:37.488 Latency(us) 01:27:37.488 [2024-12-09T05:22:19.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:37.488 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 01:27:37.488 xnvme_bdev : 5.00 57597.74 224.99 0.00 0.00 1106.00 56.75 15160.13 01:27:37.488 [2024-12-09T05:22:19.944Z] =================================================================================================================== 01:27:37.488 [2024-12-09T05:22:19.944Z] Total : 57597.74 224.99 0.00 0.00 1106.00 56.75 15160.13 01:27:38.859 01:27:38.859 real 0m28.341s 01:27:38.859 user 0m18.091s 01:27:38.859 sys 0m8.305s 01:27:38.859 05:22:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:38.860 05:22:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:27:38.860 ************************************ 01:27:38.860 END TEST xnvme_bdevperf 01:27:38.860 ************************************ 01:27:38.860 05:22:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:27:38.860 05:22:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:38.860 05:22:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:38.860 05:22:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:38.860 ************************************ 01:27:38.860 START TEST xnvme_fio_plugin 01:27:38.860 ************************************ 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:27:38.860 05:22:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:27:38.860 { 01:27:38.860 "subsystems": [ 01:27:38.860 { 01:27:38.860 "subsystem": "bdev", 01:27:38.860 "config": [ 01:27:38.860 { 01:27:38.860 "params": { 01:27:38.860 "io_mechanism": "io_uring_cmd", 01:27:38.860 "conserve_cpu": true, 01:27:38.860 "filename": "/dev/ng0n1", 01:27:38.860 "name": "xnvme_bdev" 01:27:38.860 }, 01:27:38.860 "method": "bdev_xnvme_create" 01:27:38.860 }, 01:27:38.860 { 01:27:38.860 "method": "bdev_wait_for_examine" 01:27:38.860 } 01:27:38.860 ] 01:27:38.860 } 01:27:38.860 ] 01:27:38.860 } 01:27:39.119 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:27:39.119 fio-3.35 01:27:39.119 Starting 1 thread 01:27:45.685 01:27:45.685 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73467: Mon Dec 9 05:22:27 2024 01:27:45.685 read: IOPS=24.4k, BW=95.3MiB/s (99.9MB/s)(477MiB/5001msec) 01:27:45.685 slat (nsec): min=2488, max=83783, avg=8216.24, stdev=3536.81 01:27:45.685 clat (usec): min=936, max=3625, avg=2294.65, stdev=341.67 01:27:45.685 lat (usec): min=939, max=3643, avg=2302.86, stdev=342.98 01:27:45.685 clat percentiles (usec): 01:27:45.685 | 1.00th=[ 1172], 5.00th=[ 1516], 10.00th=[ 1893], 20.00th=[ 2114], 01:27:45.685 | 30.00th=[ 2212], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2409], 01:27:45.685 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2638], 95.00th=[ 2704], 01:27:45.685 | 99.00th=[ 2900], 99.50th=[ 3130], 99.90th=[ 3425], 99.95th=[ 3490], 01:27:45.685 | 99.99th=[ 3589] 01:27:45.685 bw ( KiB/s): min=91136, max=121101, per=100.00%, avg=97594.33, stdev=9308.37, samples=9 01:27:45.685 iops : min=22784, max=30275, avg=24398.56, stdev=2327.01, samples=9 01:27:45.685 lat (usec) : 1000=0.04% 01:27:45.685 lat (msec) : 2=12.59%, 4=87.37% 01:27:45.685 cpu : usr=44.74%, sys=51.52%, ctx=8, majf=0, minf=762 01:27:45.685 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:27:45.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:27:45.685 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 01:27:45.685 issued rwts: total=121984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:27:45.685 latency : target=0, window=0, percentile=100.00%, depth=64 01:27:45.685 01:27:45.685 Run status group 0 (all jobs): 01:27:45.685 READ: bw=95.3MiB/s (99.9MB/s), 95.3MiB/s-95.3MiB/s (99.9MB/s-99.9MB/s), io=477MiB (500MB), run=5001-5001msec 01:27:46.620 ----------------------------------------------------- 01:27:46.620 Suppressions used: 01:27:46.620 count bytes template 01:27:46.620 1 11 /usr/src/fio/parse.c 01:27:46.620 1 8 libtcmalloc_minimal.so 01:27:46.620 1 904 libcrypto.so 01:27:46.620 ----------------------------------------------------- 01:27:46.620 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:27:46.620 05:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:27:46.620 { 01:27:46.620 "subsystems": [ 01:27:46.620 { 01:27:46.620 "subsystem": "bdev", 01:27:46.620 "config": [ 01:27:46.620 { 01:27:46.620 "params": { 01:27:46.620 "io_mechanism": "io_uring_cmd", 01:27:46.620 "conserve_cpu": true, 01:27:46.620 "filename": "/dev/ng0n1", 01:27:46.620 "name": "xnvme_bdev" 01:27:46.620 }, 01:27:46.620 "method": "bdev_xnvme_create" 01:27:46.620 }, 01:27:46.620 { 01:27:46.620 "method": "bdev_wait_for_examine" 01:27:46.620 } 01:27:46.620 ] 01:27:46.620 } 01:27:46.620 ] 01:27:46.620 } 01:27:46.620 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:27:46.620 fio-3.35 01:27:46.620 Starting 1 thread 01:27:53.188 01:27:53.188 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73569: Mon Dec 9 05:22:34 2024 01:27:53.188 write: IOPS=35.3k, BW=138MiB/s (145MB/s)(690MiB/5001msec); 0 zone resets 01:27:53.188 slat (nsec): min=2336, max=98999, avg=5061.24, stdev=2892.10 01:27:53.188 clat (usec): min=727, max=4221, avg=1614.25, stdev=454.10 01:27:53.188 lat (usec): min=730, max=4223, avg=1619.31, stdev=455.88 01:27:53.188 clat percentiles (usec): 01:27:53.188 | 1.00th=[ 881], 5.00th=[ 1045], 10.00th=[ 1123], 20.00th=[ 1221], 01:27:53.188 | 30.00th=[ 1303], 40.00th=[ 1401], 50.00th=[ 1500], 60.00th=[ 1647], 01:27:53.188 | 70.00th=[ 1811], 80.00th=[ 2040], 90.00th=[ 2311], 95.00th=[ 2474], 01:27:53.188 | 99.00th=[ 2737], 99.50th=[ 2769], 99.90th=[ 2900], 99.95th=[ 2966], 01:27:53.188 | 99.99th=[ 3163] 01:27:53.188 bw ( KiB/s): min=98816, max=161792, per=99.63%, avg=140764.67, stdev=21365.96, samples=9 01:27:53.188 iops : min=24704, max=40448, avg=35191.11, stdev=5341.46, samples=9 01:27:53.188 lat (usec) : 750=0.01%, 1000=3.39% 01:27:53.188 lat (msec) : 2=74.99%, 4=21.61%, 10=0.01% 01:27:53.188 cpu : usr=57.74%, sys=39.40%, ctx=14, majf=0, minf=763 01:27:53.188 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:27:53.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:27:53.188 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 01:27:53.188 issued rwts: total=0,176638,0,0 short=0,0,0,0 dropped=0,0,0,0 01:27:53.188 latency : target=0, window=0, percentile=100.00%, depth=64 01:27:53.188 01:27:53.188 Run status group 0 (all jobs): 01:27:53.188 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=690MiB (724MB), run=5001-5001msec 01:27:54.122 ----------------------------------------------------- 01:27:54.122 Suppressions used: 01:27:54.122 count bytes template 01:27:54.122 1 11 /usr/src/fio/parse.c 01:27:54.122 1 8 libtcmalloc_minimal.so 01:27:54.122 1 904 libcrypto.so 01:27:54.122 ----------------------------------------------------- 01:27:54.122 01:27:54.122 01:27:54.122 real 0m15.212s 01:27:54.122 user 0m9.257s 01:27:54.122 sys 0m5.368s 01:27:54.122 05:22:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:54.122 05:22:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:27:54.122 ************************************ 01:27:54.122 END TEST xnvme_fio_plugin 01:27:54.122 ************************************ 01:27:54.122 Process with pid 73023 is not found 01:27:54.122 05:22:36 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73023 01:27:54.122 05:22:36 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73023 ']' 01:27:54.122 05:22:36 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 73023 01:27:54.122 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73023) - No such process 01:27:54.122 05:22:36 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73023 is not found' 01:27:54.122 01:27:54.122 real 4m0.535s 01:27:54.122 user 2m12.667s 01:27:54.122 sys 1m32.791s 01:27:54.122 05:22:36 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:54.122 ************************************ 01:27:54.122 END TEST nvme_xnvme 01:27:54.122 ************************************ 01:27:54.122 05:22:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:54.122 05:22:36 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 01:27:54.122 05:22:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:54.122 05:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:54.122 05:22:36 -- common/autotest_common.sh@10 -- # set +x 01:27:54.122 ************************************ 01:27:54.122 START TEST blockdev_xnvme 01:27:54.122 ************************************ 01:27:54.122 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 01:27:54.380 * Looking for test storage... 01:27:54.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:54.380 05:22:36 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:54.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:54.380 --rc genhtml_branch_coverage=1 01:27:54.380 --rc genhtml_function_coverage=1 01:27:54.380 --rc genhtml_legend=1 01:27:54.380 --rc geninfo_all_blocks=1 01:27:54.380 --rc geninfo_unexecuted_blocks=1 01:27:54.380 01:27:54.380 ' 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:54.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:54.380 --rc genhtml_branch_coverage=1 01:27:54.380 --rc genhtml_function_coverage=1 01:27:54.380 --rc genhtml_legend=1 01:27:54.380 --rc geninfo_all_blocks=1 01:27:54.380 --rc geninfo_unexecuted_blocks=1 01:27:54.380 01:27:54.380 ' 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:54.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:54.380 --rc genhtml_branch_coverage=1 01:27:54.380 --rc genhtml_function_coverage=1 01:27:54.380 --rc genhtml_legend=1 01:27:54.380 --rc geninfo_all_blocks=1 01:27:54.380 --rc geninfo_unexecuted_blocks=1 01:27:54.380 01:27:54.380 ' 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:54.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:54.380 --rc genhtml_branch_coverage=1 01:27:54.380 --rc genhtml_function_coverage=1 01:27:54.380 --rc genhtml_legend=1 01:27:54.380 --rc geninfo_all_blocks=1 01:27:54.380 --rc geninfo_unexecuted_blocks=1 01:27:54.380 01:27:54.380 ' 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73703 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:27:54.380 05:22:36 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73703 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73703 ']' 01:27:54.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:54.380 05:22:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:54.639 [2024-12-09 05:22:36.871079] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:27:54.639 [2024-12-09 05:22:36.872855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73703 ] 01:27:54.639 [2024-12-09 05:22:37.077332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:54.912 [2024-12-09 05:22:37.203954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:55.848 05:22:38 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:55.848 05:22:38 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 01:27:55.848 05:22:38 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:27:55.848 05:22:38 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 01:27:55.848 05:22:38 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 01:27:55.848 05:22:38 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 01:27:55.848 05:22:38 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:27:56.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:27:57.351 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:27:57.351 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:27:57.351 0000:00:12.0 (1b36 0010): Already using the nvme driver 01:27:57.351 0000:00:13.0 (1b36 0010): Already using the nvme driver 01:27:57.351 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.351 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.352 05:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:57.352 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 01:27:57.352 nvme0n1 01:27:57.352 nvme0n2 01:27:57.352 nvme0n3 01:27:57.352 nvme1n1 01:27:57.610 nvme2n1 01:27:57.610 nvme3n1 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq 
-r '.[] | select(.claimed == false)' 01:27:57.610 05:22:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.610 05:22:39 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:27:57.610 05:22:40 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 01:27:57.611 05:22:40 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "02594689-49b4-4a0e-8a0e-4a524dee8196"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02594689-49b4-4a0e-8a0e-4a524dee8196",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "13a1009d-cba8-49f6-8d8c-3653c5ba11f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "13a1009d-cba8-49f6-8d8c-3653c5ba11f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f25d4b29-3271-424d-974a-a08feb9a7c0e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f25d4b29-3271-424d-974a-a08feb9a7c0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "99bb4bb8-eaa8-4f9b-9380-0685df77e375"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "99bb4bb8-eaa8-4f9b-9380-0685df77e375",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "db11173f-6660-4f8e-b848-87f4fd638f8d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "db11173f-6660-4f8e-b848-87f4fd638f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6db0f66b-fd8c-41ce-853e-9549eec293be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6db0f66b-fd8c-41ce-853e-9549eec293be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 01:27:57.611 05:22:40 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:27:57.611 05:22:40 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 01:27:57.611 05:22:40 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:27:57.611 05:22:40 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73703 01:27:57.611 05:22:40 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73703 ']' 01:27:57.611 05:22:40 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73703 01:27:57.611 05:22:40 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 01:27:57.611 05:22:40 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:57.611 05:22:40 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73703 01:27:57.869 killing process with pid 73703 01:27:57.869 05:22:40 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:57.869 05:22:40 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:57.869 05:22:40 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73703' 01:27:57.869 05:22:40 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73703 01:27:57.869 05:22:40 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73703 01:28:00.451 05:22:42 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:28:00.451 05:22:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 01:28:00.451 05:22:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:28:00.451 05:22:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:00.451 05:22:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:00.451 ************************************ 01:28:00.451 START TEST bdev_hello_world 01:28:00.451 ************************************ 01:28:00.451 05:22:42 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 01:28:00.451 [2024-12-09 05:22:42.812552] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:00.451 [2024-12-09 05:22:42.812690] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74004 ] 01:28:00.709 [2024-12-09 05:22:43.002674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:00.709 [2024-12-09 05:22:43.137360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:01.276 [2024-12-09 05:22:43.629874] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:28:01.276 [2024-12-09 05:22:43.630136] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 01:28:01.276 [2024-12-09 05:22:43.630167] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:28:01.276 [2024-12-09 05:22:43.632700] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:28:01.276 [2024-12-09 05:22:43.633125] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:28:01.276 [2024-12-09 05:22:43.633154] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:28:01.276 [2024-12-09 05:22:43.633444] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
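The hello-world pass above is reproducible by hand: the JSON config it consumes is generated from the same bdev_xnvme_create lines printed earlier in the trace, and the example binary takes that config plus a bdev name. Roughly, with paths relative to the repo as in this workspace (the -c flag asks xnvme to conserve CPU while polling, matching the generated config lines):

    # Equivalent RPC for one of the generated config lines above.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
    # Run the packaged example against the same config and bdev.
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1

On success it writes a string through the bdev and reads "Hello World!" back, exactly as the NOTICE lines above show.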
01:28:01.276 01:28:01.276 [2024-12-09 05:22:43.633486] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:28:02.650 01:28:02.650 real 0m2.176s 01:28:02.650 user 0m1.738s 01:28:02.650 sys 0m0.318s 01:28:02.650 ************************************ 01:28:02.650 END TEST bdev_hello_world 01:28:02.650 ************************************ 01:28:02.650 05:22:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:02.650 05:22:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:28:02.650 05:22:44 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:28:02.650 05:22:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:02.650 05:22:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:02.650 05:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:02.650 ************************************ 01:28:02.650 START TEST bdev_bounds 01:28:02.650 ************************************ 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:28:02.650 Process bdevio pid: 74046 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74046 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74046' 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74046 01:28:02.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74046 ']' 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:02.650 05:22:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:28:02.650 [2024-12-09 05:22:45.062379] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
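bdev_bounds is a two-process handshake: bdevio starts in wait mode (-w) against the generated config, the harness blocks until its RPC socket is listening, and tests.py then triggers the actual suite. A sketch of that flow, with the process plumbing condensed (the real harness uses the waitforlisten and killprocess helpers from autotest_common.sh, as the trace shows):

    # Start bdevio waiting for an RPC trigger, as in the trace above.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"      # returns once /var/tmp/spdk.sock is up
    test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"

The per-bdev suites that follow are the output of that perform_tests call.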
01:28:02.650 [2024-12-09 05:22:45.062516] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74046 ] 01:28:02.909 [2024-12-09 05:22:45.247724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:28:03.168 [2024-12-09 05:22:45.380846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:03.168 [2024-12-09 05:22:45.380989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:03.168 [2024-12-09 05:22:45.381035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:28:03.735 05:22:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:03.735 05:22:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:28:03.735 05:22:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:28:03.735 I/O targets: 01:28:03.736 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 01:28:03.736 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 01:28:03.736 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 01:28:03.736 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 01:28:03.736 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 01:28:03.736 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 01:28:03.736 01:28:03.736 01:28:03.736 CUnit - A unit testing framework for C - Version 2.1-3 01:28:03.736 http://cunit.sourceforge.net/ 01:28:03.736 01:28:03.736 01:28:03.736 Suite: bdevio tests on: nvme3n1 01:28:03.736 Test: blockdev write read block ...passed 01:28:03.736 Test: blockdev write zeroes read block ...passed 01:28:03.736 Test: blockdev write zeroes read no split ...passed 01:28:03.736 Test: blockdev write zeroes read split ...passed 01:28:03.736 Test: blockdev write zeroes read split partial ...passed 01:28:03.736 Test: blockdev reset ...passed 01:28:03.736 Test: blockdev write read 8 blocks ...passed 01:28:03.736 Test: blockdev write read size > 128k ...passed 01:28:03.736 Test: blockdev write read invalid size ...passed 01:28:03.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:03.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:03.736 Test: blockdev write read max offset ...passed 01:28:03.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:03.736 Test: blockdev writev readv 8 blocks ...passed 01:28:03.736 Test: blockdev writev readv 30 x 1block ...passed 01:28:03.736 Test: blockdev writev readv block ...passed 01:28:03.736 Test: blockdev writev readv size > 128k ...passed 01:28:03.736 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:03.736 Test: blockdev comparev and writev ...passed 01:28:03.736 Test: blockdev nvme passthru rw ...passed 01:28:03.736 Test: blockdev nvme passthru vendor specific ...passed 01:28:03.736 Test: blockdev nvme admin passthru ...passed 01:28:03.736 Test: blockdev copy ...passed 01:28:03.736 Suite: bdevio tests on: nvme2n1 01:28:03.736 Test: blockdev write read block ...passed 01:28:03.736 Test: blockdev write zeroes read block ...passed 01:28:03.736 Test: blockdev write zeroes read no split ...passed 01:28:03.736 Test: blockdev write zeroes read split ...passed 01:28:03.736 Test: blockdev write zeroes read split partial ...passed 01:28:03.736 Test: blockdev reset ...passed 
01:28:03.736 Test: blockdev write read 8 blocks ...passed 01:28:03.736 Test: blockdev write read size > 128k ...passed 01:28:03.736 Test: blockdev write read invalid size ...passed 01:28:03.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:03.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:03.736 Test: blockdev write read max offset ...passed 01:28:03.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:03.736 Test: blockdev writev readv 8 blocks ...passed 01:28:03.736 Test: blockdev writev readv 30 x 1block ...passed 01:28:03.736 Test: blockdev writev readv block ...passed 01:28:03.736 Test: blockdev writev readv size > 128k ...passed 01:28:03.736 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:03.736 Test: blockdev comparev and writev ...passed 01:28:03.736 Test: blockdev nvme passthru rw ...passed 01:28:03.736 Test: blockdev nvme passthru vendor specific ...passed 01:28:03.736 Test: blockdev nvme admin passthru ...passed 01:28:03.736 Test: blockdev copy ...passed 01:28:03.736 Suite: bdevio tests on: nvme1n1 01:28:03.736 Test: blockdev write read block ...passed 01:28:03.736 Test: blockdev write zeroes read block ...passed 01:28:04.112 Test: blockdev write zeroes read no split ...passed 01:28:04.112 Test: blockdev write zeroes read split ...passed 01:28:04.112 Test: blockdev write zeroes read split partial ...passed 01:28:04.112 Test: blockdev reset ...passed 01:28:04.112 Test: blockdev write read 8 blocks ...passed 01:28:04.112 Test: blockdev write read size > 128k ...passed 01:28:04.112 Test: blockdev write read invalid size ...passed 01:28:04.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:04.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:04.112 Test: blockdev write read max offset ...passed 01:28:04.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:04.112 Test: blockdev writev readv 8 blocks ...passed 01:28:04.112 Test: blockdev writev readv 30 x 1block ...passed 01:28:04.112 Test: blockdev writev readv block ...passed 01:28:04.112 Test: blockdev writev readv size > 128k ...passed 01:28:04.112 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:04.112 Test: blockdev comparev and writev ...passed 01:28:04.112 Test: blockdev nvme passthru rw ...passed 01:28:04.112 Test: blockdev nvme passthru vendor specific ...passed 01:28:04.112 Test: blockdev nvme admin passthru ...passed 01:28:04.112 Test: blockdev copy ...passed 01:28:04.112 Suite: bdevio tests on: nvme0n3 01:28:04.112 Test: blockdev write read block ...passed 01:28:04.112 Test: blockdev write zeroes read block ...passed 01:28:04.112 Test: blockdev write zeroes read no split ...passed 01:28:04.112 Test: blockdev write zeroes read split ...passed 01:28:04.112 Test: blockdev write zeroes read split partial ...passed 01:28:04.112 Test: blockdev reset ...passed 01:28:04.112 Test: blockdev write read 8 blocks ...passed 01:28:04.112 Test: blockdev write read size > 128k ...passed 01:28:04.112 Test: blockdev write read invalid size ...passed 01:28:04.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:04.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:04.112 Test: blockdev write read max offset ...passed 01:28:04.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:04.112 Test: blockdev writev readv 8 blocks 
...passed 01:28:04.112 Test: blockdev writev readv 30 x 1block ...passed 01:28:04.112 Test: blockdev writev readv block ...passed 01:28:04.112 Test: blockdev writev readv size > 128k ...passed 01:28:04.112 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:04.112 Test: blockdev comparev and writev ...passed 01:28:04.112 Test: blockdev nvme passthru rw ...passed 01:28:04.112 Test: blockdev nvme passthru vendor specific ...passed 01:28:04.112 Test: blockdev nvme admin passthru ...passed 01:28:04.112 Test: blockdev copy ...passed 01:28:04.112 Suite: bdevio tests on: nvme0n2 01:28:04.112 Test: blockdev write read block ...passed 01:28:04.112 Test: blockdev write zeroes read block ...passed 01:28:04.112 Test: blockdev write zeroes read no split ...passed 01:28:04.112 Test: blockdev write zeroes read split ...passed 01:28:04.112 Test: blockdev write zeroes read split partial ...passed 01:28:04.112 Test: blockdev reset ...passed 01:28:04.112 Test: blockdev write read 8 blocks ...passed 01:28:04.112 Test: blockdev write read size > 128k ...passed 01:28:04.112 Test: blockdev write read invalid size ...passed 01:28:04.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:04.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:04.112 Test: blockdev write read max offset ...passed 01:28:04.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:04.112 Test: blockdev writev readv 8 blocks ...passed 01:28:04.112 Test: blockdev writev readv 30 x 1block ...passed 01:28:04.112 Test: blockdev writev readv block ...passed 01:28:04.112 Test: blockdev writev readv size > 128k ...passed 01:28:04.112 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:04.112 Test: blockdev comparev and writev ...passed 01:28:04.112 Test: blockdev nvme passthru rw ...passed 01:28:04.112 Test: blockdev nvme passthru vendor specific ...passed 01:28:04.112 Test: blockdev nvme admin passthru ...passed 01:28:04.112 Test: blockdev copy ...passed 01:28:04.112 Suite: bdevio tests on: nvme0n1 01:28:04.112 Test: blockdev write read block ...passed 01:28:04.112 Test: blockdev write zeroes read block ...passed 01:28:04.112 Test: blockdev write zeroes read no split ...passed 01:28:04.112 Test: blockdev write zeroes read split ...passed 01:28:04.112 Test: blockdev write zeroes read split partial ...passed 01:28:04.112 Test: blockdev reset ...passed 01:28:04.112 Test: blockdev write read 8 blocks ...passed 01:28:04.112 Test: blockdev write read size > 128k ...passed 01:28:04.112 Test: blockdev write read invalid size ...passed 01:28:04.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:04.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:04.112 Test: blockdev write read max offset ...passed 01:28:04.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:04.112 Test: blockdev writev readv 8 blocks ...passed 01:28:04.112 Test: blockdev writev readv 30 x 1block ...passed 01:28:04.112 Test: blockdev writev readv block ...passed 01:28:04.112 Test: blockdev writev readv size > 128k ...passed 01:28:04.112 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:04.112 Test: blockdev comparev and writev ...passed 01:28:04.112 Test: blockdev nvme passthru rw ...passed 01:28:04.112 Test: blockdev nvme passthru vendor specific ...passed 01:28:04.112 Test: blockdev nvme admin passthru ...passed 01:28:04.112 Test: blockdev copy ...passed 
01:28:04.112 01:28:04.112 Run Summary: Type Total Ran Passed Failed Inactive 01:28:04.112 suites 6 6 n/a 0 0 01:28:04.112 tests 138 138 138 0 0 01:28:04.112 asserts 780 780 780 0 n/a 01:28:04.112 01:28:04.112 Elapsed time = 1.335 seconds 01:28:04.112 0 01:28:04.112 05:22:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74046 01:28:04.113 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74046 ']' 01:28:04.113 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74046 01:28:04.113 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:28:04.113 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:04.113 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74046 01:28:04.372 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:04.372 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:04.372 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74046' 01:28:04.372 killing process with pid 74046 01:28:04.372 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74046 01:28:04.372 05:22:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74046 01:28:05.773 05:22:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:28:05.773 01:28:05.773 real 0m2.885s 01:28:05.773 user 0m6.838s 01:28:05.773 sys 0m0.523s 01:28:05.773 05:22:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:05.773 05:22:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:28:05.773 ************************************ 01:28:05.773 END TEST bdev_bounds 01:28:05.773 ************************************ 01:28:05.773 05:22:47 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 01:28:05.773 05:22:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:28:05.773 05:22:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:05.773 05:22:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:05.773 ************************************ 01:28:05.773 START TEST bdev_nbd 01:28:05.773 ************************************ 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
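The nbd test starting here exports each xnvme bdev through the kernel NBD driver and exercises it as a regular block device. The moving parts, condensed from the traces below: a minimal bdev_svc app serves RPC on its own socket, and nbd_start_disk/nbd_stop_disk attach and detach the devices.

    # Load the bdevs in a minimal app with a dedicated RPC socket.
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json test/bdev/bdev.json &
    # Attach a bdev to a kernel NBD node, inspect, then detach.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0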
01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74111 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74111 /var/tmp/spdk-nbd.sock 01:28:05.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74111 ']' 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:05.773 05:22:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:28:05.773 [2024-12-09 05:22:48.066939] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
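Each nbd_start_disk below is followed by a readiness probe (waitfornbd): the node must show up in /proc/partitions, and one direct-I/O read of a single 4 KiB block must succeed and yield a 4096-byte file. Condensed from the trace (the real helper retries up to 20 times; the output path here is illustrative):

    grep -q -w nbd0 /proc/partitions || exit 1
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]   # a full block must come back
    rm -f /tmp/nbdtest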
01:28:05.773 [2024-12-09 05:22:48.067090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:28:06.033 [2024-12-09 05:22:48.261403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:06.033 [2024-12-09 05:22:48.394181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:06.602 05:22:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:06.602 05:22:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:28:06.602 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 01:28:06.602 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:06.602 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:28:06.602 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:06.603 05:22:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:06.862 
1+0 records in 01:28:06.862 1+0 records out 01:28:06.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801608 s, 5.1 MB/s 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:06.862 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:07.122 1+0 records in 01:28:07.122 1+0 records out 01:28:07.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590846 s, 6.9 MB/s 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:07.122 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 01:28:07.381 05:22:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:07.381 1+0 records in 01:28:07.381 1+0 records out 01:28:07.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704672 s, 5.8 MB/s 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:07.381 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:07.640 1+0 records in 01:28:07.640 1+0 records out 01:28:07.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786459 s, 5.2 MB/s 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:07.640 05:22:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:07.900 1+0 records in 01:28:07.900 1+0 records out 01:28:07.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847544 s, 4.8 MB/s 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:07.900 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 01:28:08.159 05:22:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:08.159 1+0 records in 01:28:08.159 1+0 records out 01:28:08.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807831 s, 5.1 MB/s 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:28:08.159 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd0", 01:28:08.419 "bdev_name": "nvme0n1" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd1", 01:28:08.419 "bdev_name": "nvme0n2" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd2", 01:28:08.419 "bdev_name": "nvme0n3" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd3", 01:28:08.419 "bdev_name": "nvme1n1" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd4", 01:28:08.419 "bdev_name": "nvme2n1" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd5", 01:28:08.419 "bdev_name": "nvme3n1" 01:28:08.419 } 01:28:08.419 ]' 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd0", 01:28:08.419 "bdev_name": "nvme0n1" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd1", 01:28:08.419 "bdev_name": "nvme0n2" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd2", 01:28:08.419 "bdev_name": "nvme0n3" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd3", 01:28:08.419 "bdev_name": "nvme1n1" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": "/dev/nbd4", 01:28:08.419 "bdev_name": "nvme2n1" 01:28:08.419 }, 01:28:08.419 { 01:28:08.419 "nbd_device": 
"/dev/nbd5", 01:28:08.419 "bdev_name": "nvme3n1" 01:28:08.419 } 01:28:08.419 ]' 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:08.419 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:08.677 05:22:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:08.936 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:09.194 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:09.452 05:22:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:09.710 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:09.968 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 01:28:10.227 /dev/nbd0 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:10.227 1+0 records in 01:28:10.227 1+0 records out 01:28:10.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427944 s, 9.6 MB/s 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:10.227 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 01:28:10.485 /dev/nbd1 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:10.485 1+0 records in 01:28:10.485 1+0 records out 01:28:10.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045634 s, 9.0 MB/s 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:10.485 05:22:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:10.485 05:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 01:28:10.744 /dev/nbd10 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:10.744 1+0 records in 01:28:10.744 1+0 records out 01:28:10.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680776 s, 6.0 MB/s 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:10.744 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 01:28:11.002 /dev/nbd11 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:11.002 05:22:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:11.002 1+0 records in 01:28:11.002 1+0 records out 01:28:11.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647995 s, 6.3 MB/s 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:11.002 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 01:28:11.260 /dev/nbd12 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:11.260 1+0 records in 01:28:11.260 1+0 records out 01:28:11.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000911304 s, 4.5 MB/s 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:11.260 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 01:28:11.519 /dev/nbd13 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:11.519 1+0 records in 01:28:11.519 1+0 records out 01:28:11.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724978 s, 5.6 MB/s 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:11.519 05:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:11.778 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd0", 01:28:11.778 "bdev_name": "nvme0n1" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd1", 01:28:11.778 "bdev_name": "nvme0n2" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd10", 01:28:11.778 "bdev_name": "nvme0n3" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd11", 01:28:11.778 "bdev_name": "nvme1n1" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd12", 01:28:11.778 "bdev_name": "nvme2n1" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd13", 01:28:11.778 "bdev_name": "nvme3n1" 01:28:11.778 } 01:28:11.778 ]' 01:28:11.778 05:22:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd0", 01:28:11.778 "bdev_name": "nvme0n1" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd1", 01:28:11.778 "bdev_name": "nvme0n2" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd10", 01:28:11.778 "bdev_name": "nvme0n3" 01:28:11.778 }, 01:28:11.778 { 01:28:11.778 "nbd_device": "/dev/nbd11", 01:28:11.779 "bdev_name": "nvme1n1" 01:28:11.779 }, 01:28:11.779 { 01:28:11.779 "nbd_device": "/dev/nbd12", 01:28:11.779 "bdev_name": "nvme2n1" 01:28:11.779 }, 01:28:11.779 { 01:28:11.779 "nbd_device": "/dev/nbd13", 01:28:11.779 "bdev_name": "nvme3n1" 01:28:11.779 } 01:28:11.779 ]' 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:28:11.779 /dev/nbd1 01:28:11.779 /dev/nbd10 01:28:11.779 /dev/nbd11 01:28:11.779 /dev/nbd12 01:28:11.779 /dev/nbd13' 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:28:11.779 /dev/nbd1 01:28:11.779 /dev/nbd10 01:28:11.779 /dev/nbd11 01:28:11.779 /dev/nbd12 01:28:11.779 /dev/nbd13' 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:28:11.779 256+0 records in 01:28:11.779 256+0 records out 01:28:11.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126776 s, 82.7 MB/s 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:11.779 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:28:12.037 256+0 records in 01:28:12.037 256+0 records out 01:28:12.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122238 s, 8.6 MB/s 01:28:12.037 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:12.037 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:28:12.037 256+0 records in 01:28:12.037 256+0 records out 01:28:12.037 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.127796 s, 8.2 MB/s 01:28:12.037 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:12.037 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 01:28:12.295 256+0 records in 01:28:12.295 256+0 records out 01:28:12.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121914 s, 8.6 MB/s 01:28:12.295 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:12.295 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 01:28:12.295 256+0 records in 01:28:12.295 256+0 records out 01:28:12.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133769 s, 7.8 MB/s 01:28:12.295 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:12.295 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 01:28:12.553 256+0 records in 01:28:12.553 256+0 records out 01:28:12.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153216 s, 6.8 MB/s 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 01:28:12.553 256+0 records in 01:28:12.553 256+0 records out 01:28:12.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128325 s, 8.2 MB/s 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:12.553 05:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:28:12.553 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:12.553 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:12.811 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:13.082 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:13.341 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:13.600 05:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:13.859 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:14.118 05:22:56 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:14.118 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:28:14.377 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:28:14.636 malloc_lvol_verify 01:28:14.636 05:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:28:14.895 657bc426-385d-4db8-9440-98a874b438e8 01:28:14.895 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:28:14.895 5633ff39-d7c2-4a8b-be71-9eaa198ac322 01:28:14.895 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:28:15.155 /dev/nbd0 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
01:28:15.155 mke2fs 1.47.0 (5-Feb-2023) 01:28:15.155 Discarding device blocks: 0/4096 done 01:28:15.155 Creating filesystem with 4096 1k blocks and 1024 inodes 01:28:15.155 01:28:15.155 Allocating group tables: 0/1 done 01:28:15.155 Writing inode tables: 0/1 done 01:28:15.155 Creating journal (1024 blocks): done 01:28:15.155 Writing superblocks and filesystem accounting information: 0/1 done 01:28:15.155 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:15.155 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74111 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74111 ']' 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74111 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74111 01:28:15.414 killing process with pid 74111 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:15.414 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:15.415 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74111' 01:28:15.415 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74111 01:28:15.415 05:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74111 01:28:16.793 05:22:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:28:16.793 01:28:16.793 real 0m11.217s 01:28:16.793 user 0m14.094s 01:28:16.793 sys 0m4.948s 01:28:16.793 05:22:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:16.793 ************************************ 01:28:16.793 END TEST bdev_nbd 01:28:16.793 ************************************ 01:28:16.793 
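The bdev_nbd stage that closes above leans on two small polling helpers whose xtrace dominates this part of the log: waitfornbd blocks until a freshly started /dev/nbdX shows up in /proc/partitions and then proves it answers reads with a single O_DIRECT dd, while waitfornbd_exit blocks until a stopped device disappears again. A minimal sketch of the idiom, reconstructed from the trace rather than copied from SPDK's nbd_common.sh (the sleep back-off between polls is an assumption; the delay itself is not visible in the xtrace):

waitfornbd() {
    # Poll up to 20 times for the device to appear in /proc/partitions,
    # mirroring the (( i <= 20 )) / grep -q -w loop in the trace above.
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not traced
    done
    # Prove the device is readable: one 4 KiB block, O_DIRECT.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != "0" ]   # the '[' 4096 '!=' 0 ']' check seen in the trace
}

waitfornbd_exit() {
    # Same loop, inverted: wait for the device to vanish after nbd_stop_disk.
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed back-off; not traced
    done
    return 0
}

The data-integrity pass in the middle of the stage (nbd_dd_data_verify) follows one pattern for all six devices: write 1 MiB of /dev/urandom through each nbd with O_DIRECT, then byte-compare the first 1M of every device against the source file. The whole round trip, condensed from the trace:

# One shared 1 MiB random file, written through and compared against
# every exported nbd device (O_DIRECT on the write side, as traced).
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M nbdrandtest "$nbd"
done
rm nbdrandtest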
05:22:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:28:16.793 05:22:59 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:28:16.793 05:22:59 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 01:28:16.793 05:22:59 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 01:28:16.793 05:22:59 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 01:28:16.793 05:22:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:16.793 05:22:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:16.793 05:22:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:16.793 ************************************ 01:28:16.793 START TEST bdev_fio 01:28:16.793 ************************************ 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 01:28:16.793 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 01:28:16.793 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 01:28:17.052 ************************************ 01:28:17.052 START TEST bdev_fio_rw_verify 01:28:17.052 ************************************ 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:28:17.052 05:22:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:28:17.310 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:28:17.310 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:28:17.310 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:28:17.310 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:28:17.310 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:28:17.310 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:28:17.310 fio-3.35 01:28:17.310 Starting 6 threads 01:28:29.501 01:28:29.501 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74523: Mon Dec 9 05:23:10 2024 01:28:29.501 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(1300MiB/10001msec) 01:28:29.501 slat (usec): min=2, max=1585, avg= 7.88, stdev= 7.19 01:28:29.501 clat (usec): min=85, max=4550, avg=548.09, 
stdev=242.61 01:28:29.501 lat (usec): min=91, max=4579, avg=555.97, stdev=243.83 01:28:29.501 clat percentiles (usec): 01:28:29.501 | 50.000th=[ 537], 99.000th=[ 1270], 99.900th=[ 2114], 99.990th=[ 3818], 01:28:29.501 | 99.999th=[ 4490] 01:28:29.501 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(1313MiB/10001msec); 0 zone resets 01:28:29.501 slat (usec): min=10, max=2598, avg=25.48, stdev=34.34 01:28:29.501 clat (usec): min=82, max=4338, avg=640.77, stdev=262.96 01:28:29.501 lat (usec): min=98, max=4405, avg=666.25, stdev=268.14 01:28:29.501 clat percentiles (usec): 01:28:29.501 | 50.000th=[ 619], 99.000th=[ 1483], 99.900th=[ 2180], 99.990th=[ 3359], 01:28:29.501 | 99.999th=[ 4178] 01:28:29.501 bw ( KiB/s): min=106793, max=157595, per=99.83%, avg=134165.37, stdev=2239.05, samples=114 01:28:29.501 iops : min=26698, max=39398, avg=33541.00, stdev=559.79, samples=114 01:28:29.501 lat (usec) : 100=0.01%, 250=6.59%, 500=29.98%, 750=41.40%, 1000=16.56% 01:28:29.501 lat (msec) : 2=5.30%, 4=0.16%, 10=0.01% 01:28:29.501 cpu : usr=56.08%, sys=29.03%, ctx=7699, majf=0, minf=27627 01:28:29.501 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:28:29.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:28:29.501 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:28:29.501 issued rwts: total=332797,336023,0,0 short=0,0,0,0 dropped=0,0,0,0 01:28:29.501 latency : target=0, window=0, percentile=100.00%, depth=8 01:28:29.501 01:28:29.501 Run status group 0 (all jobs): 01:28:29.501 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1300MiB (1363MB), run=10001-10001msec 01:28:29.501 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1313MiB (1376MB), run=10001-10001msec 01:28:29.760 ----------------------------------------------------- 01:28:29.760 Suppressions used: 01:28:29.760 count bytes template 01:28:29.760 6 48 /usr/src/fio/parse.c 01:28:29.760 2997 287712 /usr/src/fio/iolog.c 01:28:29.760 1 8 libtcmalloc_minimal.so 01:28:29.760 1 904 libcrypto.so 01:28:29.760 ----------------------------------------------------- 01:28:29.760 01:28:29.760 01:28:29.760 real 0m12.704s 01:28:29.760 user 0m35.765s 01:28:29.760 sys 0m17.915s 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 01:28:29.760 ************************************ 01:28:29.760 END TEST bdev_fio_rw_verify 01:28:29.760 ************************************ 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 01:28:29.760 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "02594689-49b4-4a0e-8a0e-4a524dee8196"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02594689-49b4-4a0e-8a0e-4a524dee8196",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "13a1009d-cba8-49f6-8d8c-3653c5ba11f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "13a1009d-cba8-49f6-8d8c-3653c5ba11f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f25d4b29-3271-424d-974a-a08feb9a7c0e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f25d4b29-3271-424d-974a-a08feb9a7c0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "99bb4bb8-eaa8-4f9b-9380-0685df77e375"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "99bb4bb8-eaa8-4f9b-9380-0685df77e375",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "db11173f-6660-4f8e-b848-87f4fd638f8d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "db11173f-6660-4f8e-b848-87f4fd638f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6db0f66b-fd8c-41ce-853e-9549eec293be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6db0f66b-fd8c-41ce-853e-9549eec293be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:28:29.761 /home/vagrant/spdk_repo/spdk 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
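The trace above shows why no trim fio job actually runs in this pass: fio_config_gen regenerates bdev.fio for the trim workload (touching the file and appending rw=trimwrite), and blockdev.sh@354 then filters the printed bdev dump for devices that can service unmap. A condensed sketch of that filter, assuming $bdevs_json holds the printf'd JSON objects from the trace:

    # Every xNVMe bdev above reports "unmap": false, so this selects nothing;
    # [[ -n '' ]] at blockdev.sh@354 is then false and the trim pass is skipped.
    printf '%s\n' "$bdevs_json" | jq -r \
        'select(.supported_io_types.unmap == true) | .name'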
01:28:29.761 01:28:29.761 real 0m12.939s 01:28:29.761 user 0m35.884s 01:28:29.761 sys 0m18.040s 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:29.761 05:23:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 01:28:29.761 ************************************ 01:28:29.761 END TEST bdev_fio 01:28:29.761 ************************************ 01:28:30.020 05:23:12 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 01:28:30.020 05:23:12 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:28:30.020 05:23:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:28:30.020 05:23:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:30.020 05:23:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:30.020 ************************************ 01:28:30.020 START TEST bdev_verify 01:28:30.020 ************************************ 01:28:30.020 05:23:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:28:30.020 [2024-12-09 05:23:12.346703] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:30.020 [2024-12-09 05:23:12.346827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74700 ] 01:28:30.277 [2024-12-09 05:23:12.537369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:28:30.277 [2024-12-09 05:23:12.671696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:30.277 [2024-12-09 05:23:12.671722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:30.844 Running I/O for 5 seconds... 
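For reference, the bdev_verify invocation traced above reduces to the standalone command below. The flag readings are inferred from the job headers that follow ("depth: 128", "IO size: 4096"), the 5-second run banner, and the reactors logged on cores 0 and 1, so treat the comment as a reading aid rather than authoritative bdevperf documentation; -C is carried over verbatim from the trace:

    # Queue depth 128, 4 KiB IOs, verify workload, 5 s, core mask 0x3 (cores 0-1):
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3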
01:28:33.159 22592.00 IOPS, 88.25 MiB/s [2024-12-09T05:23:16.549Z] 23712.00 IOPS, 92.62 MiB/s [2024-12-09T05:23:17.484Z] 23754.33 IOPS, 92.79 MiB/s [2024-12-09T05:23:18.417Z] 23799.75 IOPS, 92.97 MiB/s [2024-12-09T05:23:18.417Z] 23520.00 IOPS, 91.88 MiB/s 01:28:35.961 Latency(us) 01:28:35.961 [2024-12-09T05:23:18.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:35.961 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x0 length 0x80000 01:28:35.961 nvme0n1 : 5.04 1779.38 6.95 0.00 0.00 71818.67 12844.00 65272.80 01:28:35.961 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x80000 length 0x80000 01:28:35.961 nvme0n1 : 5.06 1822.79 7.12 0.00 0.00 70111.30 9843.56 61903.88 01:28:35.961 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x0 length 0x80000 01:28:35.961 nvme0n2 : 5.05 1773.97 6.93 0.00 0.00 71926.49 14633.74 72010.64 01:28:35.961 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x80000 length 0x80000 01:28:35.961 nvme0n2 : 5.05 1825.30 7.13 0.00 0.00 69900.30 10791.07 63588.34 01:28:35.961 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x0 length 0x80000 01:28:35.961 nvme0n3 : 5.05 1773.46 6.93 0.00 0.00 71830.25 11001.63 74537.33 01:28:35.961 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x80000 length 0x80000 01:28:35.961 nvme0n3 : 5.03 1805.47 7.05 0.00 0.00 70549.11 12633.45 61482.77 01:28:35.961 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x0 length 0x20000 01:28:35.961 nvme1n1 : 5.07 1791.58 7.00 0.00 0.00 70978.50 9001.33 64009.46 01:28:35.961 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x20000 length 0x20000 01:28:35.961 nvme1n1 : 5.09 1812.20 7.08 0.00 0.00 70173.46 7632.71 65693.92 01:28:35.961 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x0 length 0xbd0bd 01:28:35.961 nvme2n1 : 5.07 2796.91 10.93 0.00 0.00 45320.60 5395.53 62746.11 01:28:35.961 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0xbd0bd length 0xbd0bd 01:28:35.961 nvme2n1 : 5.07 2727.47 10.65 0.00 0.00 46504.93 5474.49 55587.16 01:28:35.961 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0x0 length 0xa0000 01:28:35.961 nvme3n1 : 5.06 1769.03 6.91 0.00 0.00 71586.34 11264.82 72010.64 01:28:35.961 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:35.961 Verification LBA range: start 0xa0000 length 0xa0000 01:28:35.961 nvme3n1 : 5.08 1638.59 6.40 0.00 0.00 77322.98 8369.66 114543.24 01:28:35.961 [2024-12-09T05:23:18.417Z] =================================================================================================================== 01:28:35.961 [2024-12-09T05:23:18.417Z] Total : 23316.15 91.08 0.00 0.00 65474.25 5395.53 114543.24 01:28:37.340 01:28:37.340 real 0m7.346s 01:28:37.340 user 0m10.942s 01:28:37.340 sys 0m2.207s 01:28:37.341 05:23:19 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:28:37.341 05:23:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 01:28:37.341 ************************************ 01:28:37.341 END TEST bdev_verify 01:28:37.341 ************************************ 01:28:37.341 05:23:19 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:28:37.341 05:23:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:28:37.341 05:23:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:37.341 05:23:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:37.341 ************************************ 01:28:37.341 START TEST bdev_verify_big_io 01:28:37.341 ************************************ 01:28:37.341 05:23:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:28:37.341 [2024-12-09 05:23:19.764726] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:37.341 [2024-12-09 05:23:19.764840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74800 ] 01:28:37.600 [2024-12-09 05:23:19.947024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:28:37.859 [2024-12-09 05:23:20.066209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:37.859 [2024-12-09 05:23:20.066303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:38.429 Running I/O for 5 seconds... 
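The big-IO variant that follows is the same bdevperf command with only the IO size changed, which is what the "IO size: 65536" in the job headers below reflects:

    # Identical to the bdev_verify run, but with 64 KiB IOs instead of 4 KiB:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3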
01:28:44.450 2416.00 IOPS, 151.00 MiB/s [2024-12-09T05:23:26.906Z] 4542.50 IOPS, 283.91 MiB/s 01:28:44.450 Latency(us) 01:28:44.450 [2024-12-09T05:23:26.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:44.450 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x0 length 0x8000 01:28:44.450 nvme0n1 : 5.58 203.71 12.73 0.00 0.00 601692.23 22950.76 795064.85 01:28:44.450 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x8000 length 0x8000 01:28:44.450 nvme0n1 : 5.73 128.56 8.03 0.00 0.00 967806.56 23898.27 1192597.28 01:28:44.450 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x0 length 0x8000 01:28:44.450 nvme0n2 : 5.66 226.01 14.13 0.00 0.00 538698.37 4684.90 660308.10 01:28:44.450 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x8000 length 0x8000 01:28:44.450 nvme0n2 : 5.73 108.81 6.80 0.00 0.00 1124261.61 57271.62 1994399.97 01:28:44.450 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x0 length 0x8000 01:28:44.450 nvme0n3 : 5.66 212.14 13.26 0.00 0.00 558533.59 78327.36 562609.45 01:28:44.450 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x8000 length 0x8000 01:28:44.450 nvme0n3 : 5.73 122.88 7.68 0.00 0.00 972860.86 63167.23 2102205.38 01:28:44.450 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x0 length 0x2000 01:28:44.450 nvme1n1 : 5.67 193.42 12.09 0.00 0.00 602357.16 84222.97 528920.26 01:28:44.450 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x2000 length 0x2000 01:28:44.450 nvme1n1 : 5.74 133.86 8.37 0.00 0.00 867798.33 36215.88 1111743.23 01:28:44.450 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x0 length 0xbd0b 01:28:44.450 nvme2n1 : 5.72 218.22 13.64 0.00 0.00 520792.80 9106.61 603036.48 01:28:44.450 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0xbd0b length 0xbd0b 01:28:44.450 nvme2n1 : 5.73 100.49 6.28 0.00 0.00 1122456.80 112016.55 2668183.75 01:28:44.450 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0x0 length 0xa000 01:28:44.450 nvme3n1 : 5.73 220.69 13.79 0.00 0.00 505302.08 2013.46 629987.83 01:28:44.450 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:28:44.450 Verification LBA range: start 0xa000 length 0xa000 01:28:44.450 nvme3n1 : 5.75 139.24 8.70 0.00 0.00 795464.84 2368.77 1259975.66 01:28:44.450 [2024-12-09T05:23:26.906Z] =================================================================================================================== 01:28:44.450 [2024-12-09T05:23:26.906Z] Total : 2008.03 125.50 0.00 0.00 703310.56 2013.46 2668183.75 01:28:45.826 01:28:45.826 real 0m8.296s 01:28:45.826 user 0m14.976s 01:28:45.826 sys 0m0.556s 01:28:45.826 05:23:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:45.826 05:23:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set 
+x 01:28:45.826 ************************************ 01:28:45.826 END TEST bdev_verify_big_io 01:28:45.826 ************************************ 01:28:45.826 05:23:28 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:45.826 05:23:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:28:45.826 05:23:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:45.826 05:23:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:45.826 ************************************ 01:28:45.826 START TEST bdev_write_zeroes 01:28:45.826 ************************************ 01:28:45.826 05:23:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:45.826 [2024-12-09 05:23:28.151393] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:45.826 [2024-12-09 05:23:28.151723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74922 ] 01:28:46.084 [2024-12-09 05:23:28.340627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:46.084 [2024-12-09 05:23:28.464323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:46.672 Running I/O for 1 seconds... 01:28:47.608 36640.00 IOPS, 143.12 MiB/s 01:28:47.608 Latency(us) 01:28:47.608 [2024-12-09T05:23:30.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:47.608 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:28:47.608 nvme0n1 : 1.03 5353.69 20.91 0.00 0.00 23887.58 9580.36 40005.91 01:28:47.608 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:28:47.608 nvme0n2 : 1.03 5346.61 20.89 0.00 0.00 23903.90 9685.64 39163.68 01:28:47.608 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:28:47.608 nvme0n3 : 1.03 5339.67 20.86 0.00 0.00 23918.52 9790.92 38532.01 01:28:47.608 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:28:47.608 nvme1n1 : 1.03 5332.94 20.83 0.00 0.00 23928.60 9896.20 37900.34 01:28:47.608 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:28:47.608 nvme2n1 : 1.04 9614.94 37.56 0.00 0.00 13252.25 6027.21 33899.75 01:28:47.608 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:28:47.608 nvme3n1 : 1.04 5310.83 20.75 0.00 0.00 23845.03 2974.12 33689.19 01:28:47.608 [2024-12-09T05:23:30.064Z] =================================================================================================================== 01:28:47.608 [2024-12-09T05:23:30.064Z] Total : 36298.68 141.79 0.00 0.00 21063.04 2974.12 40005.91 01:28:48.984 01:28:48.984 real 0m3.245s 01:28:48.984 user 0m2.429s 01:28:48.984 sys 0m0.633s 01:28:48.984 05:23:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:48.984 05:23:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 01:28:48.984 ************************************ 01:28:48.984 END TEST 
bdev_write_zeroes 01:28:48.984 ************************************ 01:28:48.984 05:23:31 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:48.984 05:23:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:28:48.984 05:23:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:48.984 05:23:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:48.984 ************************************ 01:28:48.984 START TEST bdev_json_nonenclosed 01:28:48.984 ************************************ 01:28:48.984 05:23:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:49.243 [2024-12-09 05:23:31.466127] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:49.243 [2024-12-09 05:23:31.466619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74980 ] 01:28:49.243 [2024-12-09 05:23:31.651662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:49.502 [2024-12-09 05:23:31.780970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:49.502 [2024-12-09 05:23:31.781092] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:28:49.502 [2024-12-09 05:23:31.781117] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:28:49.502 [2024-12-09 05:23:31.781130] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:28:49.761 01:28:49.761 real 0m0.751s 01:28:49.761 user 0m0.491s 01:28:49.761 sys 0m0.154s 01:28:49.761 05:23:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:49.761 05:23:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:28:49.761 ************************************ 01:28:49.761 END TEST bdev_json_nonenclosed 01:28:49.761 ************************************ 01:28:49.761 05:23:32 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:49.761 05:23:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:28:49.761 05:23:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:49.761 05:23:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:49.761 ************************************ 01:28:49.761 START TEST bdev_json_nonarray 01:28:49.761 ************************************ 01:28:49.761 05:23:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:50.020 [2024-12-09 05:23:32.293523] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:28:50.020 [2024-12-09 05:23:32.293636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 01:28:50.279 [2024-12-09 05:23:32.478938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:50.279 [2024-12-09 05:23:32.611124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:50.279 [2024-12-09 05:23:32.611251] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 01:28:50.280 [2024-12-09 05:23:32.611278] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:28:50.280 [2024-12-09 05:23:32.611291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:28:50.538 01:28:50.538 real 0m0.762s 01:28:50.538 user 0m0.498s 01:28:50.538 sys 0m0.158s 01:28:50.538 05:23:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:50.538 05:23:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:28:50.538 ************************************ 01:28:50.538 END TEST bdev_json_nonarray 01:28:50.538 ************************************ 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 01:28:50.797 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 01:28:50.798 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 01:28:50.798 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 01:28:50.798 05:23:33 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:28:51.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:28:52.303 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:28:52.303 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:28:52.304 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:28:52.304 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:28:52.564 01:28:52.564 real 0m58.284s 01:28:52.564 user 1m35.225s 01:28:52.564 sys 0m31.273s 01:28:52.564 05:23:34 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:52.564 05:23:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:28:52.564 ************************************ 01:28:52.564 END TEST blockdev_xnvme 01:28:52.564 ************************************ 01:28:52.564 05:23:34 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 01:28:52.564 05:23:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:28:52.564 05:23:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:52.564 05:23:34 -- 
common/autotest_common.sh@10 -- # set +x 01:28:52.564 ************************************ 01:28:52.564 START TEST ublk 01:28:52.564 ************************************ 01:28:52.564 05:23:34 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 01:28:52.564 * Looking for test storage... 01:28:52.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 01:28:52.564 05:23:34 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:28:52.564 05:23:34 ublk -- common/autotest_common.sh@1693 -- # lcov --version 01:28:52.564 05:23:34 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:28:52.829 05:23:35 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:28:52.829 05:23:35 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:28:52.829 05:23:35 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 01:28:52.829 05:23:35 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 01:28:52.830 05:23:35 ublk -- scripts/common.sh@336 -- # IFS=.-: 01:28:52.830 05:23:35 ublk -- scripts/common.sh@336 -- # read -ra ver1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@337 -- # IFS=.-: 01:28:52.830 05:23:35 ublk -- scripts/common.sh@337 -- # read -ra ver2 01:28:52.830 05:23:35 ublk -- scripts/common.sh@338 -- # local 'op=<' 01:28:52.830 05:23:35 ublk -- scripts/common.sh@340 -- # ver1_l=2 01:28:52.830 05:23:35 ublk -- scripts/common.sh@341 -- # ver2_l=1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:28:52.830 05:23:35 ublk -- scripts/common.sh@344 -- # case "$op" in 01:28:52.830 05:23:35 ublk -- scripts/common.sh@345 -- # : 1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 01:28:52.830 05:23:35 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:28:52.830 05:23:35 ublk -- scripts/common.sh@365 -- # decimal 1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@353 -- # local d=1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:28:52.830 05:23:35 ublk -- scripts/common.sh@355 -- # echo 1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@365 -- # ver1[v]=1 01:28:52.830 05:23:35 ublk -- scripts/common.sh@366 -- # decimal 2 01:28:52.830 05:23:35 ublk -- scripts/common.sh@353 -- # local d=2 01:28:52.830 05:23:35 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:28:52.830 05:23:35 ublk -- scripts/common.sh@355 -- # echo 2 01:28:52.830 05:23:35 ublk -- scripts/common.sh@366 -- # ver2[v]=2 01:28:52.830 05:23:35 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:28:52.830 05:23:35 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:28:52.830 05:23:35 ublk -- scripts/common.sh@368 -- # return 0 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:28:52.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:52.830 --rc genhtml_branch_coverage=1 01:28:52.830 --rc genhtml_function_coverage=1 01:28:52.830 --rc genhtml_legend=1 01:28:52.830 --rc geninfo_all_blocks=1 01:28:52.830 --rc geninfo_unexecuted_blocks=1 01:28:52.830 01:28:52.830 ' 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:28:52.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:52.830 --rc genhtml_branch_coverage=1 01:28:52.830 --rc genhtml_function_coverage=1 01:28:52.830 --rc genhtml_legend=1 01:28:52.830 --rc geninfo_all_blocks=1 01:28:52.830 --rc geninfo_unexecuted_blocks=1 01:28:52.830 01:28:52.830 ' 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:28:52.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:52.830 --rc genhtml_branch_coverage=1 01:28:52.830 --rc genhtml_function_coverage=1 01:28:52.830 --rc genhtml_legend=1 01:28:52.830 --rc geninfo_all_blocks=1 01:28:52.830 --rc geninfo_unexecuted_blocks=1 01:28:52.830 01:28:52.830 ' 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:28:52.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:52.830 --rc genhtml_branch_coverage=1 01:28:52.830 --rc genhtml_function_coverage=1 01:28:52.830 --rc genhtml_legend=1 01:28:52.830 --rc geninfo_all_blocks=1 01:28:52.830 --rc geninfo_unexecuted_blocks=1 01:28:52.830 01:28:52.830 ' 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 01:28:52.830 05:23:35 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 01:28:52.830 05:23:35 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 01:28:52.830 05:23:35 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 01:28:52.830 05:23:35 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 01:28:52.830 05:23:35 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 01:28:52.830 05:23:35 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 01:28:52.830 05:23:35 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 01:28:52.830 05:23:35 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 01:28:52.830 05:23:35 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 01:28:52.830 05:23:35 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:52.830 05:23:35 ublk -- common/autotest_common.sh@10 -- # set +x 01:28:52.830 ************************************ 01:28:52.830 START TEST test_save_ublk_config 01:28:52.830 ************************************ 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75302 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75302 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75302 ']' 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:52.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:52.830 05:23:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:28:52.830 [2024-12-09 05:23:35.229905] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:28:52.831 [2024-12-09 05:23:35.230012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75302 ] 01:28:53.091 [2024-12-09 05:23:35.409935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:53.091 [2024-12-09 05:23:35.537714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:28:54.468 [2024-12-09 05:23:36.598500] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:28:54.468 [2024-12-09 05:23:36.599830] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:28:54.468 malloc0 01:28:54.468 [2024-12-09 05:23:36.686749] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 01:28:54.468 [2024-12-09 05:23:36.686857] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 01:28:54.468 [2024-12-09 05:23:36.686872] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:28:54.468 [2024-12-09 05:23:36.686889] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:28:54.468 [2024-12-09 05:23:36.694714] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:28:54.468 [2024-12-09 05:23:36.694752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:28:54.468 [2024-12-09 05:23:36.702508] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:28:54.468 [2024-12-09 05:23:36.702627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:28:54.468 [2024-12-09 05:23:36.719505] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:28:54.468 0 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.468 05:23:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:28:54.727 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.727 05:23:37 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 01:28:54.727 "subsystems": [ 01:28:54.727 { 01:28:54.727 "subsystem": "fsdev", 01:28:54.727 "config": [ 01:28:54.727 { 01:28:54.727 "method": "fsdev_set_opts", 01:28:54.727 "params": { 01:28:54.727 "fsdev_io_pool_size": 65535, 01:28:54.727 "fsdev_io_cache_size": 256 01:28:54.727 } 01:28:54.727 } 01:28:54.727 ] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "keyring", 01:28:54.727 "config": [] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "iobuf", 01:28:54.727 "config": [ 01:28:54.727 { 
01:28:54.727 "method": "iobuf_set_options", 01:28:54.727 "params": { 01:28:54.727 "small_pool_count": 8192, 01:28:54.727 "large_pool_count": 1024, 01:28:54.727 "small_bufsize": 8192, 01:28:54.727 "large_bufsize": 135168, 01:28:54.727 "enable_numa": false 01:28:54.727 } 01:28:54.727 } 01:28:54.727 ] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "sock", 01:28:54.727 "config": [ 01:28:54.727 { 01:28:54.727 "method": "sock_set_default_impl", 01:28:54.727 "params": { 01:28:54.727 "impl_name": "posix" 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "sock_impl_set_options", 01:28:54.727 "params": { 01:28:54.727 "impl_name": "ssl", 01:28:54.727 "recv_buf_size": 4096, 01:28:54.727 "send_buf_size": 4096, 01:28:54.727 "enable_recv_pipe": true, 01:28:54.727 "enable_quickack": false, 01:28:54.727 "enable_placement_id": 0, 01:28:54.727 "enable_zerocopy_send_server": true, 01:28:54.727 "enable_zerocopy_send_client": false, 01:28:54.727 "zerocopy_threshold": 0, 01:28:54.727 "tls_version": 0, 01:28:54.727 "enable_ktls": false 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "sock_impl_set_options", 01:28:54.727 "params": { 01:28:54.727 "impl_name": "posix", 01:28:54.727 "recv_buf_size": 2097152, 01:28:54.727 "send_buf_size": 2097152, 01:28:54.727 "enable_recv_pipe": true, 01:28:54.727 "enable_quickack": false, 01:28:54.727 "enable_placement_id": 0, 01:28:54.727 "enable_zerocopy_send_server": true, 01:28:54.727 "enable_zerocopy_send_client": false, 01:28:54.727 "zerocopy_threshold": 0, 01:28:54.727 "tls_version": 0, 01:28:54.727 "enable_ktls": false 01:28:54.727 } 01:28:54.727 } 01:28:54.727 ] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "vmd", 01:28:54.727 "config": [] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "accel", 01:28:54.727 "config": [ 01:28:54.727 { 01:28:54.727 "method": "accel_set_options", 01:28:54.727 "params": { 01:28:54.727 "small_cache_size": 128, 01:28:54.727 "large_cache_size": 16, 01:28:54.727 "task_count": 2048, 01:28:54.727 "sequence_count": 2048, 01:28:54.727 "buf_count": 2048 01:28:54.727 } 01:28:54.727 } 01:28:54.727 ] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "bdev", 01:28:54.727 "config": [ 01:28:54.727 { 01:28:54.727 "method": "bdev_set_options", 01:28:54.727 "params": { 01:28:54.727 "bdev_io_pool_size": 65535, 01:28:54.727 "bdev_io_cache_size": 256, 01:28:54.727 "bdev_auto_examine": true, 01:28:54.727 "iobuf_small_cache_size": 128, 01:28:54.727 "iobuf_large_cache_size": 16 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "bdev_raid_set_options", 01:28:54.727 "params": { 01:28:54.727 "process_window_size_kb": 1024, 01:28:54.727 "process_max_bandwidth_mb_sec": 0 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "bdev_iscsi_set_options", 01:28:54.727 "params": { 01:28:54.727 "timeout_sec": 30 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "bdev_nvme_set_options", 01:28:54.727 "params": { 01:28:54.727 "action_on_timeout": "none", 01:28:54.727 "timeout_us": 0, 01:28:54.727 "timeout_admin_us": 0, 01:28:54.727 "keep_alive_timeout_ms": 10000, 01:28:54.727 "arbitration_burst": 0, 01:28:54.727 "low_priority_weight": 0, 01:28:54.727 "medium_priority_weight": 0, 01:28:54.727 "high_priority_weight": 0, 01:28:54.727 "nvme_adminq_poll_period_us": 10000, 01:28:54.727 "nvme_ioq_poll_period_us": 0, 01:28:54.727 "io_queue_requests": 0, 01:28:54.727 "delay_cmd_submit": true, 01:28:54.727 "transport_retry_count": 4, 01:28:54.727 
"bdev_retry_count": 3, 01:28:54.727 "transport_ack_timeout": 0, 01:28:54.727 "ctrlr_loss_timeout_sec": 0, 01:28:54.727 "reconnect_delay_sec": 0, 01:28:54.727 "fast_io_fail_timeout_sec": 0, 01:28:54.727 "disable_auto_failback": false, 01:28:54.727 "generate_uuids": false, 01:28:54.727 "transport_tos": 0, 01:28:54.727 "nvme_error_stat": false, 01:28:54.727 "rdma_srq_size": 0, 01:28:54.727 "io_path_stat": false, 01:28:54.727 "allow_accel_sequence": false, 01:28:54.727 "rdma_max_cq_size": 0, 01:28:54.727 "rdma_cm_event_timeout_ms": 0, 01:28:54.727 "dhchap_digests": [ 01:28:54.727 "sha256", 01:28:54.727 "sha384", 01:28:54.727 "sha512" 01:28:54.727 ], 01:28:54.727 "dhchap_dhgroups": [ 01:28:54.727 "null", 01:28:54.727 "ffdhe2048", 01:28:54.727 "ffdhe3072", 01:28:54.727 "ffdhe4096", 01:28:54.727 "ffdhe6144", 01:28:54.727 "ffdhe8192" 01:28:54.727 ] 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "bdev_nvme_set_hotplug", 01:28:54.727 "params": { 01:28:54.727 "period_us": 100000, 01:28:54.727 "enable": false 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "bdev_malloc_create", 01:28:54.727 "params": { 01:28:54.727 "name": "malloc0", 01:28:54.727 "num_blocks": 8192, 01:28:54.727 "block_size": 4096, 01:28:54.727 "physical_block_size": 4096, 01:28:54.727 "uuid": "e7cb5b01-ab5c-4551-b202-bc0e0f5a0b89", 01:28:54.727 "optimal_io_boundary": 0, 01:28:54.727 "md_size": 0, 01:28:54.727 "dif_type": 0, 01:28:54.727 "dif_is_head_of_md": false, 01:28:54.727 "dif_pi_format": 0 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "bdev_wait_for_examine" 01:28:54.727 } 01:28:54.727 ] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "scsi", 01:28:54.727 "config": null 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "scheduler", 01:28:54.727 "config": [ 01:28:54.727 { 01:28:54.727 "method": "framework_set_scheduler", 01:28:54.727 "params": { 01:28:54.727 "name": "static" 01:28:54.727 } 01:28:54.727 } 01:28:54.727 ] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "vhost_scsi", 01:28:54.727 "config": [] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "vhost_blk", 01:28:54.727 "config": [] 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "subsystem": "ublk", 01:28:54.727 "config": [ 01:28:54.727 { 01:28:54.727 "method": "ublk_create_target", 01:28:54.727 "params": { 01:28:54.727 "cpumask": "1" 01:28:54.727 } 01:28:54.727 }, 01:28:54.727 { 01:28:54.727 "method": "ublk_start_disk", 01:28:54.727 "params": { 01:28:54.727 "bdev_name": "malloc0", 01:28:54.728 "ublk_id": 0, 01:28:54.728 "num_queues": 1, 01:28:54.728 "queue_depth": 128 01:28:54.728 } 01:28:54.728 } 01:28:54.728 ] 01:28:54.728 }, 01:28:54.728 { 01:28:54.728 "subsystem": "nbd", 01:28:54.728 "config": [] 01:28:54.728 }, 01:28:54.728 { 01:28:54.728 "subsystem": "nvmf", 01:28:54.728 "config": [ 01:28:54.728 { 01:28:54.728 "method": "nvmf_set_config", 01:28:54.728 "params": { 01:28:54.728 "discovery_filter": "match_any", 01:28:54.728 "admin_cmd_passthru": { 01:28:54.728 "identify_ctrlr": false 01:28:54.728 }, 01:28:54.728 "dhchap_digests": [ 01:28:54.728 "sha256", 01:28:54.728 "sha384", 01:28:54.728 "sha512" 01:28:54.728 ], 01:28:54.728 "dhchap_dhgroups": [ 01:28:54.728 "null", 01:28:54.728 "ffdhe2048", 01:28:54.728 "ffdhe3072", 01:28:54.728 "ffdhe4096", 01:28:54.728 "ffdhe6144", 01:28:54.728 "ffdhe8192" 01:28:54.728 ] 01:28:54.728 } 01:28:54.728 }, 01:28:54.728 { 01:28:54.728 "method": "nvmf_set_max_subsystems", 01:28:54.728 "params": { 01:28:54.728 "max_subsystems": 1024 
01:28:54.728 } 01:28:54.728 }, 01:28:54.728 { 01:28:54.728 "method": "nvmf_set_crdt", 01:28:54.728 "params": { 01:28:54.728 "crdt1": 0, 01:28:54.728 "crdt2": 0, 01:28:54.728 "crdt3": 0 01:28:54.728 } 01:28:54.728 } 01:28:54.728 ] 01:28:54.728 }, 01:28:54.728 { 01:28:54.728 "subsystem": "iscsi", 01:28:54.728 "config": [ 01:28:54.728 { 01:28:54.728 "method": "iscsi_set_options", 01:28:54.728 "params": { 01:28:54.728 "node_base": "iqn.2016-06.io.spdk", 01:28:54.728 "max_sessions": 128, 01:28:54.728 "max_connections_per_session": 2, 01:28:54.728 "max_queue_depth": 64, 01:28:54.728 "default_time2wait": 2, 01:28:54.728 "default_time2retain": 20, 01:28:54.728 "first_burst_length": 8192, 01:28:54.728 "immediate_data": true, 01:28:54.728 "allow_duplicated_isid": false, 01:28:54.728 "error_recovery_level": 0, 01:28:54.728 "nop_timeout": 60, 01:28:54.728 "nop_in_interval": 30, 01:28:54.728 "disable_chap": false, 01:28:54.728 "require_chap": false, 01:28:54.728 "mutual_chap": false, 01:28:54.728 "chap_group": 0, 01:28:54.728 "max_large_datain_per_connection": 64, 01:28:54.728 "max_r2t_per_connection": 4, 01:28:54.728 "pdu_pool_size": 36864, 01:28:54.728 "immediate_data_pool_size": 16384, 01:28:54.728 "data_out_pool_size": 2048 01:28:54.728 } 01:28:54.728 } 01:28:54.728 ] 01:28:54.728 } 01:28:54.728 ] 01:28:54.728 }' 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75302 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75302 ']' 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75302 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75302 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:54.728 killing process with pid 75302 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75302' 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75302 01:28:54.728 05:23:37 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75302 01:28:56.101 [2024-12-09 05:23:38.541162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:28:56.359 [2024-12-09 05:23:38.579500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:28:56.359 [2024-12-09 05:23:38.579646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:28:56.359 [2024-12-09 05:23:38.588499] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:28:56.359 [2024-12-09 05:23:38.588569] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:28:56.359 [2024-12-09 05:23:38.588588] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:28:56.359 [2024-12-09 05:23:38.588619] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:28:56.359 [2024-12-09 05:23:38.588788] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75380 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75380 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75380 ']' 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:58.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:58.889 05:23:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:28:58.890 05:23:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 01:28:58.890 "subsystems": [ 01:28:58.890 { 01:28:58.890 "subsystem": "fsdev", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "fsdev_set_opts", 01:28:58.890 "params": { 01:28:58.890 "fsdev_io_pool_size": 65535, 01:28:58.890 "fsdev_io_cache_size": 256 01:28:58.890 } 01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "keyring", 01:28:58.890 "config": [] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "iobuf", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "iobuf_set_options", 01:28:58.890 "params": { 01:28:58.890 "small_pool_count": 8192, 01:28:58.890 "large_pool_count": 1024, 01:28:58.890 "small_bufsize": 8192, 01:28:58.890 "large_bufsize": 135168, 01:28:58.890 "enable_numa": false 01:28:58.890 } 01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "sock", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "sock_set_default_impl", 01:28:58.890 "params": { 01:28:58.890 "impl_name": "posix" 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "sock_impl_set_options", 01:28:58.890 "params": { 01:28:58.890 "impl_name": "ssl", 01:28:58.890 "recv_buf_size": 4096, 01:28:58.890 "send_buf_size": 4096, 01:28:58.890 "enable_recv_pipe": true, 01:28:58.890 "enable_quickack": false, 01:28:58.890 "enable_placement_id": 0, 01:28:58.890 "enable_zerocopy_send_server": true, 01:28:58.890 "enable_zerocopy_send_client": false, 01:28:58.890 "zerocopy_threshold": 0, 01:28:58.890 "tls_version": 0, 01:28:58.890 "enable_ktls": false 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "sock_impl_set_options", 01:28:58.890 "params": { 01:28:58.890 "impl_name": "posix", 01:28:58.890 "recv_buf_size": 2097152, 01:28:58.890 "send_buf_size": 2097152, 01:28:58.890 "enable_recv_pipe": true, 01:28:58.890 "enable_quickack": false, 01:28:58.890 "enable_placement_id": 0, 01:28:58.890 "enable_zerocopy_send_server": true, 01:28:58.890 "enable_zerocopy_send_client": false, 01:28:58.890 "zerocopy_threshold": 0, 01:28:58.890 "tls_version": 0, 01:28:58.890 "enable_ktls": false 01:28:58.890 } 01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "vmd", 01:28:58.890 "config": [] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "accel", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "accel_set_options", 01:28:58.890 "params": { 01:28:58.890 "small_cache_size": 128, 
01:28:58.890 "large_cache_size": 16, 01:28:58.890 "task_count": 2048, 01:28:58.890 "sequence_count": 2048, 01:28:58.890 "buf_count": 2048 01:28:58.890 } 01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "bdev", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "bdev_set_options", 01:28:58.890 "params": { 01:28:58.890 "bdev_io_pool_size": 65535, 01:28:58.890 "bdev_io_cache_size": 256, 01:28:58.890 "bdev_auto_examine": true, 01:28:58.890 "iobuf_small_cache_size": 128, 01:28:58.890 "iobuf_large_cache_size": 16 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "bdev_raid_set_options", 01:28:58.890 "params": { 01:28:58.890 "process_window_size_kb": 1024, 01:28:58.890 "process_max_bandwidth_mb_sec": 0 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "bdev_iscsi_set_options", 01:28:58.890 "params": { 01:28:58.890 "timeout_sec": 30 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "bdev_nvme_set_options", 01:28:58.890 "params": { 01:28:58.890 "action_on_timeout": "none", 01:28:58.890 "timeout_us": 0, 01:28:58.890 "timeout_admin_us": 0, 01:28:58.890 "keep_alive_timeout_ms": 10000, 01:28:58.890 "arbitration_burst": 0, 01:28:58.890 "low_priority_weight": 0, 01:28:58.890 "medium_priority_weight": 0, 01:28:58.890 "high_priority_weight": 0, 01:28:58.890 "nvme_adminq_poll_period_us": 10000, 01:28:58.890 "nvme_ioq_poll_period_us": 0, 01:28:58.890 "io_queue_requests": 0, 01:28:58.890 "delay_cmd_submit": true, 01:28:58.890 "transport_retry_count": 4, 01:28:58.890 "bdev_retry_count": 3, 01:28:58.890 "transport_ack_timeout": 0, 01:28:58.890 "ctrlr_loss_timeout_sec": 0, 01:28:58.890 "reconnect_delay_sec": 0, 01:28:58.890 "fast_io_fail_timeout_sec": 0, 01:28:58.890 "disable_auto_failback": false, 01:28:58.890 "generate_uuids": false, 01:28:58.890 "transport_tos": 0, 01:28:58.890 "nvme_error_stat": false, 01:28:58.890 "rdma_srq_size": 0, 01:28:58.890 "io_path_stat": false, 01:28:58.890 "allow_accel_sequence": false, 01:28:58.890 "rdma_max_cq_size": 0, 01:28:58.890 "rdma_cm_event_timeout_ms": 0, 01:28:58.890 "dhchap_digests": [ 01:28:58.890 "sha256", 01:28:58.890 "sha384", 01:28:58.890 "sha512" 01:28:58.890 ], 01:28:58.890 "dhchap_dhgroups": [ 01:28:58.890 "null", 01:28:58.890 "ffdhe2048", 01:28:58.890 "ffdhe3072", 01:28:58.890 "ffdhe4096", 01:28:58.890 "ffdhe6144", 01:28:58.890 "ffdhe8192" 01:28:58.890 ] 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "bdev_nvme_set_hotplug", 01:28:58.890 "params": { 01:28:58.890 "period_us": 100000, 01:28:58.890 "enable": false 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "bdev_malloc_create", 01:28:58.890 "params": { 01:28:58.890 "name": "malloc0", 01:28:58.890 "num_blocks": 8192, 01:28:58.890 "block_size": 4096, 01:28:58.890 "physical_block_size": 4096, 01:28:58.890 "uuid": "e7cb5b01-ab5c-4551-b202-bc0e0f5a0b89", 01:28:58.890 "optimal_io_boundary": 0, 01:28:58.890 "md_size": 0, 01:28:58.890 "dif_type": 0, 01:28:58.890 "dif_is_head_of_md": false, 01:28:58.890 "dif_pi_format": 0 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "bdev_wait_for_examine" 01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "scsi", 01:28:58.890 "config": null 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "scheduler", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "framework_set_scheduler", 01:28:58.890 "params": { 01:28:58.890 "name": "static" 01:28:58.890 } 
01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "vhost_scsi", 01:28:58.890 "config": [] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "vhost_blk", 01:28:58.890 "config": [] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "ublk", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "ublk_create_target", 01:28:58.890 "params": { 01:28:58.890 "cpumask": "1" 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "ublk_start_disk", 01:28:58.890 "params": { 01:28:58.890 "bdev_name": "malloc0", 01:28:58.890 "ublk_id": 0, 01:28:58.890 "num_queues": 1, 01:28:58.890 "queue_depth": 128 01:28:58.890 } 01:28:58.890 } 01:28:58.890 ] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "nbd", 01:28:58.890 "config": [] 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "subsystem": "nvmf", 01:28:58.890 "config": [ 01:28:58.890 { 01:28:58.890 "method": "nvmf_set_config", 01:28:58.890 "params": { 01:28:58.890 "discovery_filter": "match_any", 01:28:58.890 "admin_cmd_passthru": { 01:28:58.890 "identify_ctrlr": false 01:28:58.890 }, 01:28:58.890 "dhchap_digests": [ 01:28:58.890 "sha256", 01:28:58.890 "sha384", 01:28:58.890 "sha512" 01:28:58.890 ], 01:28:58.890 "dhchap_dhgroups": [ 01:28:58.890 "null", 01:28:58.890 "ffdhe2048", 01:28:58.890 "ffdhe3072", 01:28:58.890 "ffdhe4096", 01:28:58.890 "ffdhe6144", 01:28:58.890 "ffdhe8192" 01:28:58.890 ] 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "nvmf_set_max_subsystems", 01:28:58.890 "params": { 01:28:58.890 "max_subsystems": 1024 01:28:58.890 } 01:28:58.890 }, 01:28:58.890 { 01:28:58.890 "method": "nvmf_set_crdt", 01:28:58.890 "params": { 01:28:58.890 "crdt1": 0, 01:28:58.890 "crdt2": 0, 01:28:58.891 "crdt3": 0 01:28:58.891 } 01:28:58.891 } 01:28:58.891 ] 01:28:58.891 }, 01:28:58.891 { 01:28:58.891 "subsystem": "iscsi", 01:28:58.891 "config": [ 01:28:58.891 { 01:28:58.891 "method": "iscsi_set_options", 01:28:58.891 "params": { 01:28:58.891 "node_base": "iqn.2016-06.io.spdk", 01:28:58.891 "max_sessions": 128, 01:28:58.891 "max_connections_per_session": 2, 01:28:58.891 "max_queue_depth": 64, 01:28:58.891 "default_time2wait": 2, 01:28:58.891 "default_time2retain": 20, 01:28:58.891 "first_burst_length": 8192, 01:28:58.891 "immediate_data": true, 01:28:58.891 "allow_duplicated_isid": false, 01:28:58.891 "error_recovery_level": 0, 01:28:58.891 "nop_timeout": 60, 01:28:58.891 "nop_in_interval": 30, 01:28:58.891 "disable_chap": false, 01:28:58.891 "require_chap": false, 01:28:58.891 "mutual_chap": false, 01:28:58.891 "chap_group": 0, 01:28:58.891 "max_large_datain_per_connection": 64, 01:28:58.891 "max_r2t_per_connection": 4, 01:28:58.891 "pdu_pool_size": 36864, 01:28:58.891 "immediate_data_pool_size": 16384, 01:28:58.891 "data_out_pool_size": 2048 01:28:58.891 } 01:28:58.891 } 01:28:58.891 ] 01:28:58.891 } 01:28:58.891 ] 01:28:58.891 }' 01:28:58.891 [2024-12-09 05:23:41.053270] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:28:58.891 [2024-12-09 05:23:41.053397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75380 ] 01:28:58.891 [2024-12-09 05:23:41.238408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:59.149 [2024-12-09 05:23:41.365283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:00.087 [2024-12-09 05:23:42.517482] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:29:00.087 [2024-12-09 05:23:42.518726] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:29:00.087 [2024-12-09 05:23:42.525622] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 01:29:00.087 [2024-12-09 05:23:42.525742] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 01:29:00.087 [2024-12-09 05:23:42.525756] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:29:00.087 [2024-12-09 05:23:42.525766] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:29:00.087 [2024-12-09 05:23:42.534583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:00.087 [2024-12-09 05:23:42.534608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:00.087 [2024-12-09 05:23:42.541493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:00.087 [2024-12-09 05:23:42.541604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:29:00.346 [2024-12-09 05:23:42.558494] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75380 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75380 ']' 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75380 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75380 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:00.346 killing process with pid 75380 01:29:00.346 
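Note: the UBLK_CMD_ADD_DEV -> UBLK_CMD_SET_PARAMS -> UBLK_CMD_START_DEV debug sequence above is the kernel control-command handshake driven by a single ublk_start_disk RPC; the same three steps repeat for every disk created later in this log. Expressed as the equivalent direct RPC calls, with parameters taken from the saved config above (here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, which the test's rpc_cmd helper invokes):

  # Sketch of the RPC flow behind the handshake above:
  rpc.py ublk_create_target                          # UBLK target created (ublk.c: 758)
  rpc.py ublk_start_disk malloc0 0 -q 1 -d 128       # ADD_DEV, SET_PARAMS, START_DEV
  rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # -> /dev/ublkb0
  test -b /dev/ublkb0                                # block device node exists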
05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75380' 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75380 01:29:00.346 05:23:42 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75380 01:29:02.251 [2024-12-09 05:23:44.298328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:29:02.251 [2024-12-09 05:23:44.339514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:29:02.251 [2024-12-09 05:23:44.339662] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:29:02.251 [2024-12-09 05:23:44.347493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:29:02.251 [2024-12-09 05:23:44.347551] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:29:02.251 [2024-12-09 05:23:44.347561] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:29:02.251 [2024-12-09 05:23:44.347603] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:29:02.251 [2024-12-09 05:23:44.347768] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:29:04.155 05:23:46 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 01:29:04.155 01:29:04.155 real 0m11.226s 01:29:04.155 user 0m8.166s 01:29:04.155 sys 0m3.769s 01:29:04.155 05:23:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:04.155 05:23:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:29:04.155 ************************************ 01:29:04.155 END TEST test_save_ublk_config 01:29:04.155 ************************************ 01:29:04.155 05:23:46 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75470 01:29:04.155 05:23:46 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 01:29:04.155 05:23:46 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:29:04.155 05:23:46 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75470 01:29:04.155 05:23:46 ublk -- common/autotest_common.sh@835 -- # '[' -z 75470 ']' 01:29:04.155 05:23:46 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:04.155 05:23:46 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:04.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:04.155 05:23:46 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:04.155 05:23:46 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:04.155 05:23:46 ublk -- common/autotest_common.sh@10 -- # set +x 01:29:04.155 [2024-12-09 05:23:46.523345] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
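Note: shutdown runs the handshake in reverse, as the debug lines above show: when killprocess stops the target, each disk gets UBLK_CMD_STOP_DEV and UBLK_CMD_DEL_DEV before _ublk_fini completes. Done explicitly over RPC instead (both calls appear verbatim later in this log), the same teardown would be:

  # Explicit teardown, equivalent to what killprocess triggers implicitly:
  rpc.py ublk_stop_disk 0        # STOP_DEV + DEL_DEV for ublk0
  rpc.py ublk_destroy_target     # finish shutdown of the ublk target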
01:29:04.155 [2024-12-09 05:23:46.523504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75470 ] 01:29:04.414 [2024-12-09 05:23:46.712200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:29:04.414 [2024-12-09 05:23:46.841003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:04.414 [2024-12-09 05:23:46.841047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:05.792 05:23:47 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:05.792 05:23:47 ublk -- common/autotest_common.sh@868 -- # return 0 01:29:05.792 05:23:47 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 01:29:05.792 05:23:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:05.792 05:23:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:05.792 05:23:47 ublk -- common/autotest_common.sh@10 -- # set +x 01:29:05.792 ************************************ 01:29:05.792 START TEST test_create_ublk 01:29:05.792 ************************************ 01:29:05.792 05:23:47 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 01:29:05.792 05:23:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 01:29:05.792 05:23:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.792 05:23:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:05.792 [2024-12-09 05:23:47.868510] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:29:05.792 [2024-12-09 05:23:47.871540] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:29:05.792 05:23:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.792 05:23:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 01:29:05.792 05:23:47 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 01:29:05.792 05:23:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.792 05:23:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:05.792 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.792 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 01:29:05.792 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 01:29:05.792 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.792 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:05.792 [2024-12-09 05:23:48.213653] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 01:29:05.792 [2024-12-09 05:23:48.214172] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 01:29:05.792 [2024-12-09 05:23:48.214195] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:29:05.792 [2024-12-09 05:23:48.214204] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:29:05.792 [2024-12-09 05:23:48.221522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:05.792 [2024-12-09 05:23:48.221551] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:05.792 
[2024-12-09 05:23:48.229496] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:05.792 [2024-12-09 05:23:48.230117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:29:06.106 [2024-12-09 05:23:48.252520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:29:06.106 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.106 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 01:29:06.106 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 01:29:06.106 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 01:29:06.106 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.106 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:06.106 05:23:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.106 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 01:29:06.106 { 01:29:06.106 "ublk_device": "/dev/ublkb0", 01:29:06.106 "id": 0, 01:29:06.106 "queue_depth": 512, 01:29:06.106 "num_queues": 4, 01:29:06.106 "bdev_name": "Malloc0" 01:29:06.106 } 01:29:06.106 ]' 01:29:06.106 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 01:29:06.107 05:23:48 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
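Note: run_fio_test (lvol/common.sh) assembles the fio command line from its file/offset/size/rw/pattern arguments; with pattern 0xcc it appends the verify options seen in fio_template above, and because the job is time_based for the full 10 s write, fio prints the "verification read phase will never start" notice below. A separate read-back pass would perform the actual verification; a hypothetical sketch, not executed in this run:

  # Hypothetical read-back verify of the 0xcc pattern written above:
  fio --name=fio_verify --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc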
01:29:06.107 05:23:48 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 01:29:06.373 fio: verification read phase will never start because write phase uses all of runtime 01:29:06.373 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 01:29:06.373 fio-3.35 01:29:06.373 Starting 1 process 01:29:16.368 01:29:16.368 fio_test: (groupid=0, jobs=1): err= 0: pid=75519: Mon Dec 9 05:23:58 2024 01:29:16.368 write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(521MiB/10001msec); 0 zone resets 01:29:16.368 clat (usec): min=46, max=4119, avg=74.20, stdev=115.17 01:29:16.368 lat (usec): min=46, max=4121, avg=74.66, stdev=115.18 01:29:16.368 clat percentiles (usec): 01:29:16.368 | 1.00th=[ 51], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 01:29:16.368 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 71], 01:29:16.368 | 70.00th=[ 73], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 82], 01:29:16.368 | 99.00th=[ 94], 99.50th=[ 101], 99.90th=[ 2474], 99.95th=[ 2999], 01:29:16.368 | 99.99th=[ 3687] 01:29:16.368 bw ( KiB/s): min=50768, max=60328, per=100.00%, avg=53384.00, stdev=1910.10, samples=19 01:29:16.368 iops : min=12692, max=15082, avg=13346.00, stdev=477.53, samples=19 01:29:16.368 lat (usec) : 50=0.79%, 100=98.68%, 250=0.25%, 500=0.01%, 750=0.02% 01:29:16.368 lat (usec) : 1000=0.02% 01:29:16.368 lat (msec) : 2=0.09%, 4=0.14%, 10=0.01% 01:29:16.368 cpu : usr=2.52%, sys=10.21%, ctx=133336, majf=0, minf=795 01:29:16.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:16.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:16.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:16.368 issued rwts: total=0,133334,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:16.368 latency : target=0, window=0, percentile=100.00%, depth=1 01:29:16.368 01:29:16.368 Run status group 0 (all jobs): 01:29:16.368 WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=521MiB (546MB), run=10001-10001msec 01:29:16.368 01:29:16.368 Disk stats (read/write): 01:29:16.368 ublkb0: ios=0/131913, merge=0/0, ticks=0/8579, in_queue=8579, util=99.12% 01:29:16.368 05:23:58 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:16.368 [2024-12-09 05:23:58.755109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:29:16.368 [2024-12-09 05:23:58.786047] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:29:16.368 [2024-12-09 05:23:58.786966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:29:16.368 [2024-12-09 05:23:58.797519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:29:16.368 [2024-12-09 05:23:58.797862] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:29:16.368 [2024-12-09 05:23:58.797881] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.368 05:23:58 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.368 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:16.368 [2024-12-09 05:23:58.813591] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 01:29:16.628 request: 01:29:16.628 { 01:29:16.628 "ublk_id": 0, 01:29:16.628 "method": "ublk_stop_disk", 01:29:16.628 "req_id": 1 01:29:16.628 } 01:29:16.628 Got JSON-RPC error response 01:29:16.628 response: 01:29:16.628 { 01:29:16.628 "code": -19, 01:29:16.628 "message": "No such device" 01:29:16.628 } 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:16.628 05:23:58 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:16.628 [2024-12-09 05:23:58.840574] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:29:16.628 [2024-12-09 05:23:58.851474] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:29:16.628 [2024-12-09 05:23:58.851525] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.628 05:23:58 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.628 05:23:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.197 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.197 05:23:59 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 01:29:17.197 05:23:59 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 01:29:17.197 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.197 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.456 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.456 05:23:59 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 01:29:17.456 05:23:59 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 01:29:17.456 05:23:59 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 01:29:17.456 05:23:59 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 01:29:17.456 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.456 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.456 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.456 05:23:59 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 01:29:17.456 05:23:59 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 01:29:17.456 ************************************ 01:29:17.456 END TEST test_create_ublk 01:29:17.456 ************************************ 01:29:17.456 05:23:59 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 01:29:17.456 01:29:17.456 real 0m11.896s 01:29:17.456 user 0m0.631s 01:29:17.456 sys 0m1.155s 01:29:17.456 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:17.456 05:23:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.456 05:23:59 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 01:29:17.456 05:23:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:17.456 05:23:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:17.456 05:23:59 ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.456 ************************************ 01:29:17.456 START TEST test_create_multi_ublk 01:29:17.456 ************************************ 01:29:17.456 05:23:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 01:29:17.456 05:23:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 01:29:17.456 05:23:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.456 05:23:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.456 [2024-12-09 05:23:59.842481] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:29:17.456 [2024-12-09 05:23:59.845357] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:29:17.456 05:23:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.456 05:23:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 01:29:17.457 05:23:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 01:29:17.457 05:23:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:17.457 05:23:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 01:29:17.457 05:23:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.457 05:23:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.716 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.716 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 01:29:17.716 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 01:29:17.716 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.716 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:17.716 [2024-12-09 05:24:00.167674] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 01:29:17.716 [2024-12-09 05:24:00.168199] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 01:29:17.716 [2024-12-09 05:24:00.168217] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:29:17.716 [2024-12-09 05:24:00.168234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:29:17.975 [2024-12-09 05:24:00.176921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:17.975 [2024-12-09 05:24:00.176954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:17.975 [2024-12-09 05:24:00.183498] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:17.975 [2024-12-09 05:24:00.184195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:29:17.975 [2024-12-09 05:24:00.197587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:29:17.975 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.975 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 01:29:17.975 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:17.975 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 01:29:17.975 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.976 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:18.235 [2024-12-09 05:24:00.527670] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 01:29:18.235 [2024-12-09 05:24:00.528197] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 01:29:18.235 [2024-12-09 05:24:00.528218] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 01:29:18.235 [2024-12-09 05:24:00.528227] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 01:29:18.235 [2024-12-09 05:24:00.536907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:18.235 [2024-12-09 05:24:00.536933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:18.235 [2024-12-09 05:24:00.543504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:18.235 [2024-12-09 05:24:00.544153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 01:29:18.235 [2024-12-09 05:24:00.550557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:18.235 
05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.235 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:18.495 [2024-12-09 05:24:00.874662] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 01:29:18.495 [2024-12-09 05:24:00.875236] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 01:29:18.495 [2024-12-09 05:24:00.875255] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 01:29:18.495 [2024-12-09 05:24:00.875267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 01:29:18.495 [2024-12-09 05:24:00.882522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:18.495 [2024-12-09 05:24:00.882556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:18.495 [2024-12-09 05:24:00.890504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:18.495 [2024-12-09 05:24:00.891190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 01:29:18.495 [2024-12-09 05:24:00.894110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.495 05:24:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:19.064 [2024-12-09 05:24:01.222661] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 01:29:19.064 [2024-12-09 05:24:01.223206] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 01:29:19.064 [2024-12-09 05:24:01.223229] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 01:29:19.064 [2024-12-09 05:24:01.223238] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 01:29:19.064 
[2024-12-09 05:24:01.230515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:19.064 [2024-12-09 05:24:01.230543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:19.064 [2024-12-09 05:24:01.238506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:19.064 [2024-12-09 05:24:01.239157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 01:29:19.064 [2024-12-09 05:24:01.242087] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 01:29:19.064 { 01:29:19.064 "ublk_device": "/dev/ublkb0", 01:29:19.064 "id": 0, 01:29:19.064 "queue_depth": 512, 01:29:19.064 "num_queues": 4, 01:29:19.064 "bdev_name": "Malloc0" 01:29:19.064 }, 01:29:19.064 { 01:29:19.064 "ublk_device": "/dev/ublkb1", 01:29:19.064 "id": 1, 01:29:19.064 "queue_depth": 512, 01:29:19.064 "num_queues": 4, 01:29:19.064 "bdev_name": "Malloc1" 01:29:19.064 }, 01:29:19.064 { 01:29:19.064 "ublk_device": "/dev/ublkb2", 01:29:19.064 "id": 2, 01:29:19.064 "queue_depth": 512, 01:29:19.064 "num_queues": 4, 01:29:19.064 "bdev_name": "Malloc2" 01:29:19.064 }, 01:29:19.064 { 01:29:19.064 "ublk_device": "/dev/ublkb3", 01:29:19.064 "id": 3, 01:29:19.064 "queue_depth": 512, 01:29:19.064 "num_queues": 4, 01:29:19.064 "bdev_name": "Malloc3" 01:29:19.064 } 01:29:19.064 ]' 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.064 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 01:29:19.324 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 01:29:19.583 05:24:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 01:29:19.583 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 01:29:19.583 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:19.843 [2024-12-09 05:24:02.161632] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:29:19.843 [2024-12-09 05:24:02.200552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:29:19.843 [2024-12-09 05:24:02.201424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:29:19.843 [2024-12-09 05:24:02.209550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:29:19.843 [2024-12-09 05:24:02.209853] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:29:19.843 [2024-12-09 05:24:02.209872] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:19.843 [2024-12-09 05:24:02.223573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 01:29:19.843 [2024-12-09 05:24:02.262530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 01:29:19.843 [2024-12-09 05:24:02.263360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 01:29:19.843 [2024-12-09 05:24:02.270504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 01:29:19.843 [2024-12-09 05:24:02.270783] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 01:29:19.843 [2024-12-09 05:24:02.270804] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.843 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:19.843 [2024-12-09 05:24:02.286605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 01:29:20.103 [2024-12-09 05:24:02.326537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 01:29:20.103 [2024-12-09 05:24:02.327295] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 01:29:20.103 [2024-12-09 05:24:02.334514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 01:29:20.103 [2024-12-09 05:24:02.334800] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 01:29:20.103 [2024-12-09 05:24:02.334814] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 01:29:20.103 [2024-12-09 05:24:02.350574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 01:29:20.103 [2024-12-09 05:24:02.382022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 01:29:20.103 [2024-12-09 05:24:02.382816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 01:29:20.103 [2024-12-09 05:24:02.389498] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 01:29:20.103 [2024-12-09 05:24:02.389783] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 01:29:20.103 [2024-12-09 05:24:02.389797] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.103 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 01:29:20.362 [2024-12-09 05:24:02.596554] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:29:20.362 [2024-12-09 05:24:02.604483] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:29:20.362 [2024-12-09 05:24:02.604519] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 01:29:20.362 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 01:29:20.362 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:20.362 05:24:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 01:29:20.362 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.362 05:24:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:21.298 05:24:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.298 05:24:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:21.298 05:24:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 01:29:21.298 05:24:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.298 05:24:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:21.556 05:24:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.556 05:24:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:21.556 05:24:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 01:29:21.556 05:24:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.556 05:24:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:21.815 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.816 05:24:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:29:21.816 05:24:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 01:29:21.816 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.816 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 01:29:22.384 05:24:04 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 01:29:22.384 ************************************ 01:29:22.384 END TEST test_create_multi_ublk 01:29:22.384 ************************************ 01:29:22.384 01:29:22.384 real 0m4.872s 01:29:22.384 user 0m1.043s 01:29:22.384 sys 0m0.257s 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:22.384 05:24:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:29:22.384 05:24:04 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:29:22.384 05:24:04 ublk -- ublk/ublk.sh@147 -- # cleanup 01:29:22.384 05:24:04 ublk -- ublk/ublk.sh@130 -- # killprocess 75470 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@954 -- # '[' -z 75470 ']' 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@958 -- # kill -0 75470 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@959 -- # uname 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75470 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:22.384 killing process with pid 75470 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75470' 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@973 -- # kill 75470 01:29:22.384 05:24:04 ublk -- common/autotest_common.sh@978 -- # wait 75470 01:29:23.762 [2024-12-09 05:24:06.043420] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:29:23.762 [2024-12-09 05:24:06.043499] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:29:25.140 01:29:25.140 real 0m32.624s 01:29:25.140 user 0m45.036s 01:29:25.140 sys 0m11.896s 01:29:25.140 05:24:07 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:25.140 05:24:07 ublk -- common/autotest_common.sh@10 -- # set +x 01:29:25.140 ************************************ 01:29:25.140 END TEST ublk 01:29:25.140 ************************************ 01:29:25.140 05:24:07 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 01:29:25.140 
05:24:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:25.140 05:24:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:25.140 05:24:07 -- common/autotest_common.sh@10 -- # set +x 01:29:25.140 ************************************ 01:29:25.140 START TEST ublk_recovery 01:29:25.140 ************************************ 01:29:25.140 05:24:07 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 01:29:25.400 * Looking for test storage... 01:29:25.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 01:29:25.400 05:24:07 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:29:25.400 05:24:07 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 01:29:25.400 05:24:07 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:29:25.400 05:24:07 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@345 -- # : 1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@353 -- # local d=1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@355 -- # echo 1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@353 -- # local d=2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@355 -- # echo 2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:29:25.400 05:24:07 ublk_recovery -- scripts/common.sh@368 -- # return 0 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:29:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:25.401 --rc genhtml_branch_coverage=1 01:29:25.401 --rc genhtml_function_coverage=1 01:29:25.401 --rc genhtml_legend=1 01:29:25.401 --rc geninfo_all_blocks=1 01:29:25.401 --rc geninfo_unexecuted_blocks=1 01:29:25.401 01:29:25.401 ' 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:29:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:25.401 --rc genhtml_branch_coverage=1 01:29:25.401 --rc genhtml_function_coverage=1 01:29:25.401 --rc genhtml_legend=1 01:29:25.401 --rc geninfo_all_blocks=1 01:29:25.401 --rc geninfo_unexecuted_blocks=1 01:29:25.401 01:29:25.401 ' 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:29:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:25.401 --rc genhtml_branch_coverage=1 01:29:25.401 --rc genhtml_function_coverage=1 01:29:25.401 --rc genhtml_legend=1 01:29:25.401 --rc geninfo_all_blocks=1 01:29:25.401 --rc geninfo_unexecuted_blocks=1 01:29:25.401 01:29:25.401 ' 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:29:25.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:25.401 --rc genhtml_branch_coverage=1 01:29:25.401 --rc genhtml_function_coverage=1 01:29:25.401 --rc genhtml_legend=1 01:29:25.401 --rc geninfo_all_blocks=1 01:29:25.401 --rc geninfo_unexecuted_blocks=1 01:29:25.401 01:29:25.401 ' 01:29:25.401 05:24:07 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 01:29:25.401 05:24:07 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 01:29:25.401 05:24:07 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 01:29:25.401 05:24:07 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75903 01:29:25.401 05:24:07 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 01:29:25.401 05:24:07 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:29:25.401 05:24:07 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75903 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75903 ']' 01:29:25.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:25.401 05:24:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:25.661 [2024-12-09 05:24:07.926597] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:29:25.661 [2024-12-09 05:24:07.926752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75903 ] 01:29:25.920 [2024-12-09 05:24:08.117068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:29:25.920 [2024-12-09 05:24:08.241279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:25.920 [2024-12-09 05:24:08.241322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 01:29:26.857 05:24:09 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:26.857 [2024-12-09 05:24:09.260490] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:29:26.857 [2024-12-09 05:24:09.263765] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:26.857 05:24:09 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:26.857 05:24:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:27.150 malloc0 01:29:27.150 05:24:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:27.150 05:24:09 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 01:29:27.150 05:24:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.150 05:24:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:27.150 [2024-12-09 05:24:09.441686] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 01:29:27.150 [2024-12-09 05:24:09.441833] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 01:29:27.150 [2024-12-09 05:24:09.441850] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 01:29:27.150 [2024-12-09 05:24:09.441863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 01:29:27.150 [2024-12-09 05:24:09.450632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 01:29:27.150 [2024-12-09 05:24:09.450660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 01:29:27.150 [2024-12-09 05:24:09.457501] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:29:27.150 [2024-12-09 05:24:09.457687] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 01:29:27.150 [2024-12-09 05:24:09.472584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 01:29:27.150 1 01:29:27.150 05:24:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:27.150 05:24:09 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 01:29:28.109 05:24:10 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75940 01:29:28.109 05:24:10 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 01:29:28.109 05:24:10 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 01:29:28.369 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:29:28.369 fio-3.35 01:29:28.369 Starting 1 process 01:29:33.639 05:24:15 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75903 01:29:33.639 05:24:15 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 01:29:38.914 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75903 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 01:29:38.914 05:24:20 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76051 01:29:38.914 05:24:20 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 01:29:38.914 05:24:20 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:29:38.914 05:24:20 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76051 01:29:38.914 05:24:20 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76051 ']' 01:29:38.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:38.914 05:24:20 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:38.914 05:24:20 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:38.914 05:24:20 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:38.914 05:24:20 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:38.914 05:24:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:38.914 [2024-12-09 05:24:20.626030] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
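The trace above covers the first half of the recovery scenario: a ublk target is created on the first spdk_tgt instance (pid 75903), a 64 MiB malloc bdev is exposed to the kernel as /dev/ublkb1 with 2 queues of depth 128, a 60-second randrw fio job pinned to cores 2-3 is started against it, and the target is then killed with SIGKILL while I/O is in flight. A condensed sketch of that sequence, reconstructed from the rpc_cmd/rpc.py calls logged above (paths shortened, error handling and waits omitted):

  modprobe ublk_drv
  spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096        # 64 MiB bdev, 4 KiB blocks
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128        # kernel device /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
      --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw \
      --direct=1 --time_based --runtime=60 & fio_proc=$!
  sleep 5
  kill -9 "$spdk_pid"                                 # die mid-I/O; fio keeps the queues open

The second spdk_tgt instance started below (pid 76051) now has to reclaim the device without disturbing the still-running fio job.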
01:29:38.914 [2024-12-09 05:24:20.626209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 01:29:38.914 [2024-12-09 05:24:20.815828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:29:38.914 [2024-12-09 05:24:20.950007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:38.914 [2024-12-09 05:24:20.950050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:39.853 05:24:21 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:39.854 05:24:21 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 01:29:39.854 05:24:21 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 01:29:39.854 05:24:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:39.854 05:24:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:39.854 [2024-12-09 05:24:21.950490] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:29:39.854 [2024-12-09 05:24:21.953673] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:29:39.854 05:24:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:39.854 05:24:21 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 01:29:39.854 05:24:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:39.854 05:24:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:39.854 malloc0 01:29:39.854 05:24:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:39.854 05:24:22 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 01:29:39.854 05:24:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:39.854 05:24:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:29:39.854 [2024-12-09 05:24:22.120688] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 01:29:39.854 [2024-12-09 05:24:22.120742] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 01:29:39.854 [2024-12-09 05:24:22.120756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:29:39.854 [2024-12-09 05:24:22.128515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:29:39.854 [2024-12-09 05:24:22.128543] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 01:29:39.854 1 01:29:39.854 05:24:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:39.854 05:24:22 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75940 01:29:40.791 [2024-12-09 05:24:23.126969] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:29:40.791 [2024-12-09 05:24:23.135505] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:29:40.791 [2024-12-09 05:24:23.135541] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 01:29:41.728 [2024-12-09 05:24:24.133951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:29:41.728 [2024-12-09 05:24:24.135509] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:29:41.728 [2024-12-09 05:24:24.135532] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 01:29:43.104 [2024-12-09 05:24:25.133940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:29:43.104 [2024-12-09 05:24:25.139485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:29:43.104 [2024-12-09 05:24:25.139502] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 01:29:43.104 [2024-12-09 05:24:25.139515] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 01:29:43.104 [2024-12-09 05:24:25.139620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 01:30:05.119 [2024-12-09 05:24:45.859494] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 01:30:05.119 [2024-12-09 05:24:45.867085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 01:30:05.119 [2024-12-09 05:24:45.874812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 01:30:05.119 [2024-12-09 05:24:45.874836] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 01:30:31.663 01:30:31.663 fio_test: (groupid=0, jobs=1): err= 0: pid=75947: Mon Dec 9 05:25:10 2024 01:30:31.663 read: IOPS=12.4k, BW=48.4MiB/s (50.7MB/s)(2902MiB/60003msec) 01:30:31.663 slat (nsec): min=1985, max=872447, avg=7093.95, stdev=2731.03 01:30:31.663 clat (usec): min=1001, max=30396k, avg=5369.59, stdev=295014.89 01:30:31.663 lat (usec): min=1054, max=30396k, avg=5376.68, stdev=295014.91 01:30:31.663 clat percentiles (usec): 01:30:31.663 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2278], 01:30:31.663 | 30.00th=[ 2311], 40.00th=[ 2311], 50.00th=[ 2343], 60.00th=[ 2376], 01:30:31.663 | 70.00th=[ 2409], 80.00th=[ 2442], 90.00th=[ 2933], 95.00th=[ 3818], 01:30:31.663 | 99.00th=[ 5407], 99.50th=[ 5932], 99.90th=[ 7308], 99.95th=[ 8455], 01:30:31.663 | 99.99th=[14615] 01:30:31.663 bw ( KiB/s): min=37352, max=105056, per=100.00%, avg=99375.32, stdev=12156.42, samples=59 01:30:31.663 iops : min= 9338, max=26264, avg=24843.86, stdev=3039.11, samples=59 01:30:31.663 write: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(2896MiB/60003msec); 0 zone resets 01:30:31.663 slat (usec): min=2, max=2199, avg= 7.11, stdev= 3.42 01:30:31.663 clat (usec): min=992, max=30396k, avg=4963.71, stdev=268795.30 01:30:31.663 lat (usec): min=998, max=30396k, avg=4970.82, stdev=268795.32 01:30:31.663 clat percentiles (usec): 01:30:31.663 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376], 01:30:31.663 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2442], 60.00th=[ 2474], 01:30:31.663 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2900], 95.00th=[ 3785], 01:30:31.663 | 99.00th=[ 5473], 99.50th=[ 6063], 99.90th=[ 7439], 99.95th=[ 8586], 01:30:31.663 | 99.99th=[14615] 01:30:31.663 bw ( KiB/s): min=38224, max=104696, per=100.00%, avg=99242.58, stdev=11874.87, samples=59 01:30:31.663 iops : min= 9556, max=26174, avg=24810.64, stdev=2968.72, samples=59 01:30:31.663 lat (usec) : 1000=0.01% 01:30:31.663 lat (msec) : 2=1.60%, 4=94.23%, 10=4.15%, 20=0.02%, >=2000=0.01% 01:30:31.663 cpu : usr=6.50%, sys=17.53%, ctx=63484, majf=0, minf=13 01:30:31.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 01:30:31.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:31.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:30:31.663 issued rwts: 
total=742798,741427,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:31.663 latency : target=0, window=0, percentile=100.00%, depth=128 01:30:31.663 01:30:31.663 Run status group 0 (all jobs): 01:30:31.663 READ: bw=48.4MiB/s (50.7MB/s), 48.4MiB/s-48.4MiB/s (50.7MB/s-50.7MB/s), io=2902MiB (3043MB), run=60003-60003msec 01:30:31.663 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=2896MiB (3037MB), run=60003-60003msec 01:30:31.663 01:30:31.663 Disk stats (read/write): 01:30:31.663 ublkb1: ios=740712/739476, merge=0/0, ticks=3919702/3538779, in_queue=7458482, util=99.89% 01:30:31.663 05:25:10 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:30:31.663 [2024-12-09 05:25:10.761908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 01:30:31.663 [2024-12-09 05:25:10.799521] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 01:30:31.663 [2024-12-09 05:25:10.799799] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 01:30:31.663 [2024-12-09 05:25:10.808595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 01:30:31.663 [2024-12-09 05:25:10.812666] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 01:30:31.663 [2024-12-09 05:25:10.812684] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:31.663 05:25:10 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:30:31.663 [2024-12-09 05:25:10.816701] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:30:31.663 [2024-12-09 05:25:10.824227] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:30:31.663 [2024-12-09 05:25:10.824271] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:31.663 05:25:10 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 01:30:31.663 05:25:10 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 01:30:31.663 05:25:10 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76051 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76051 ']' 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76051 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@959 -- # uname 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76051 01:30:31.663 killing process with pid 76051 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76051' 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76051 01:30:31.663 05:25:10 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76051 01:30:31.663 
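That fio summary is the pass criterion for the test: despite the target being killed and restarted mid-run, the job finishes with err=0, roughly 2.9 GiB read and written, and ublkb1 at 99.89% utilization. The ~30.4 s maximum completion latency (clat max=30396k usec) is consistent with the gap between the kill -9 at 05:24:15 and recovery completing at 05:24:45. The recovery path itself, condensed from the rpc_cmd calls in the trace above (a sketch; the real script also re-arms its traps and waits on the RPC socket):

  # second target instance, started while fio still holds /dev/ublkb1
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1    # re-attach the bdev to the existing ublk1
  wait "$fio_proc"                      # let the orphaned 60 s fio job run to completion
  rpc.py ublk_stop_disk 1
  rpc.py ublk_destroy_target

Internally the target polls UBLK_CMD_GET_DEV_INFO about once a second (each poll above reports device state 1 while the old queues are still held), then drives UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY to take the queues over.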
[2024-12-09 05:25:12.477550] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:30:31.663 [2024-12-09 05:25:12.477639] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:30:31.663 01:30:31.663 real 1m6.445s 01:30:31.663 user 1m51.456s 01:30:31.663 sys 0m24.725s 01:30:31.663 05:25:14 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:31.663 ************************************ 01:30:31.663 END TEST ublk_recovery 01:30:31.663 ************************************ 01:30:31.663 05:25:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:30:31.663 05:25:14 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 01:30:31.663 05:25:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:30:31.663 05:25:14 -- spdk/autotest.sh@260 -- # timing_exit lib 01:30:31.663 05:25:14 -- common/autotest_common.sh@732 -- # xtrace_disable 01:30:31.663 05:25:14 -- common/autotest_common.sh@10 -- # set +x 01:30:31.923 05:25:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 01:30:31.923 05:25:14 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 01:30:31.923 05:25:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:31.923 05:25:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:31.923 05:25:14 -- common/autotest_common.sh@10 -- # set +x 01:30:31.923 ************************************ 01:30:31.923 START TEST ftl 01:30:31.923 ************************************ 01:30:31.923 05:25:14 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 01:30:31.923 * Looking for test storage... 
01:30:31.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:30:31.923 05:25:14 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:30:31.923 05:25:14 ftl -- common/autotest_common.sh@1693 -- # lcov --version 01:30:31.923 05:25:14 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:30:31.923 05:25:14 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:30:31.923 05:25:14 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:30:31.923 05:25:14 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 01:30:31.923 05:25:14 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 01:30:31.923 05:25:14 ftl -- scripts/common.sh@336 -- # IFS=.-: 01:30:31.923 05:25:14 ftl -- scripts/common.sh@336 -- # read -ra ver1 01:30:31.923 05:25:14 ftl -- scripts/common.sh@337 -- # IFS=.-: 01:30:31.923 05:25:14 ftl -- scripts/common.sh@337 -- # read -ra ver2 01:30:31.923 05:25:14 ftl -- scripts/common.sh@338 -- # local 'op=<' 01:30:31.923 05:25:14 ftl -- scripts/common.sh@340 -- # ver1_l=2 01:30:31.923 05:25:14 ftl -- scripts/common.sh@341 -- # ver2_l=1 01:30:31.923 05:25:14 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:30:31.923 05:25:14 ftl -- scripts/common.sh@344 -- # case "$op" in 01:30:31.923 05:25:14 ftl -- scripts/common.sh@345 -- # : 1 01:30:31.923 05:25:14 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 01:30:31.923 05:25:14 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:30:32.182 05:25:14 ftl -- scripts/common.sh@365 -- # decimal 1 01:30:32.182 05:25:14 ftl -- scripts/common.sh@353 -- # local d=1 01:30:32.182 05:25:14 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:30:32.182 05:25:14 ftl -- scripts/common.sh@355 -- # echo 1 01:30:32.182 05:25:14 ftl -- scripts/common.sh@365 -- # ver1[v]=1 01:30:32.182 05:25:14 ftl -- scripts/common.sh@366 -- # decimal 2 01:30:32.182 05:25:14 ftl -- scripts/common.sh@353 -- # local d=2 01:30:32.182 05:25:14 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:30:32.182 05:25:14 ftl -- scripts/common.sh@355 -- # echo 2 01:30:32.182 05:25:14 ftl -- scripts/common.sh@366 -- # ver2[v]=2 01:30:32.182 05:25:14 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:30:32.182 05:25:14 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:30:32.182 05:25:14 ftl -- scripts/common.sh@368 -- # return 0 01:30:32.182 05:25:14 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:30:32.182 05:25:14 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:30:32.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:32.182 --rc genhtml_branch_coverage=1 01:30:32.182 --rc genhtml_function_coverage=1 01:30:32.182 --rc genhtml_legend=1 01:30:32.182 --rc geninfo_all_blocks=1 01:30:32.182 --rc geninfo_unexecuted_blocks=1 01:30:32.182 01:30:32.182 ' 01:30:32.182 05:25:14 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:30:32.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:32.182 --rc genhtml_branch_coverage=1 01:30:32.182 --rc genhtml_function_coverage=1 01:30:32.182 --rc genhtml_legend=1 01:30:32.182 --rc geninfo_all_blocks=1 01:30:32.183 --rc geninfo_unexecuted_blocks=1 01:30:32.183 01:30:32.183 ' 01:30:32.183 05:25:14 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:30:32.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:32.183 --rc genhtml_branch_coverage=1 01:30:32.183 --rc genhtml_function_coverage=1 01:30:32.183 --rc 
genhtml_legend=1 01:30:32.183 --rc geninfo_all_blocks=1 01:30:32.183 --rc geninfo_unexecuted_blocks=1 01:30:32.183 01:30:32.183 ' 01:30:32.183 05:25:14 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:30:32.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:32.183 --rc genhtml_branch_coverage=1 01:30:32.183 --rc genhtml_function_coverage=1 01:30:32.183 --rc genhtml_legend=1 01:30:32.183 --rc geninfo_all_blocks=1 01:30:32.183 --rc geninfo_unexecuted_blocks=1 01:30:32.183 01:30:32.183 ' 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:30:32.183 05:25:14 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 01:30:32.183 05:25:14 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:30:32.183 05:25:14 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:30:32.183 05:25:14 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:30:32.183 05:25:14 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:30:32.183 05:25:14 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:30:32.183 05:25:14 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:30:32.183 05:25:14 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:30:32.183 05:25:14 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:32.183 05:25:14 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:32.183 05:25:14 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:30:32.183 05:25:14 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:30:32.183 05:25:14 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:30:32.183 05:25:14 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:30:32.183 05:25:14 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:30:32.183 05:25:14 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:30:32.183 05:25:14 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:32.183 05:25:14 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:32.183 05:25:14 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:30:32.183 05:25:14 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:30:32.183 05:25:14 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:30:32.183 05:25:14 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:30:32.183 05:25:14 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:30:32.183 05:25:14 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:30:32.183 05:25:14 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:30:32.183 05:25:14 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 01:30:32.183 05:25:14 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:30:32.183 05:25:14 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 01:30:32.183 05:25:14 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:30:32.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:30:33.024 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:30:33.024 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:30:33.024 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 01:30:33.024 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 01:30:33.024 05:25:15 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76861 01:30:33.024 05:25:15 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 01:30:33.024 05:25:15 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76861 01:30:33.024 05:25:15 ftl -- common/autotest_common.sh@835 -- # '[' -z 76861 ']' 01:30:33.024 05:25:15 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:33.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:33.024 05:25:15 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:33.024 05:25:15 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:33.024 05:25:15 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:33.024 05:25:15 ftl -- common/autotest_common.sh@10 -- # set +x 01:30:33.024 [2024-12-09 05:25:15.455739] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:30:33.024 [2024-12-09 05:25:15.455995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76861 ] 01:30:33.282 [2024-12-09 05:25:15.642138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:33.543 [2024-12-09 05:25:15.771548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:34.111 05:25:16 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:34.111 05:25:16 ftl -- common/autotest_common.sh@868 -- # return 0 01:30:34.111 05:25:16 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 01:30:34.111 05:25:16 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:30:35.558 05:25:17 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 01:30:35.558 05:25:17 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:30:35.846 05:25:18 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 01:30:35.846 05:25:18 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 01:30:35.846 05:25:18 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@50 -- # break 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 01:30:36.105 05:25:18 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@63 -- # break 01:30:36.105 05:25:18 ftl -- ftl/ftl.sh@66 -- # killprocess 76861 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@954 -- # '[' -z 76861 ']' 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@958 -- # kill -0 76861 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@959 -- # uname 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76861 01:30:36.105 killing process with pid 76861 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76861' 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@973 -- # kill 76861 01:30:36.105 05:25:18 ftl -- common/autotest_common.sh@978 -- # wait 76861 01:30:39.392 05:25:21 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 01:30:39.392 05:25:21 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 01:30:39.392 05:25:21 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:30:39.392 05:25:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:39.392 05:25:21 ftl -- common/autotest_common.sh@10 -- # set +x 01:30:39.392 ************************************ 01:30:39.392 START TEST ftl_fio_basic 01:30:39.392 ************************************ 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 01:30:39.392 * Looking for test storage... 
01:30:39.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:30:39.392 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:30:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:39.393 --rc genhtml_branch_coverage=1 01:30:39.393 --rc genhtml_function_coverage=1 01:30:39.393 --rc genhtml_legend=1 01:30:39.393 --rc geninfo_all_blocks=1 01:30:39.393 --rc geninfo_unexecuted_blocks=1 01:30:39.393 01:30:39.393 ' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:30:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:39.393 --rc 
genhtml_branch_coverage=1 01:30:39.393 --rc genhtml_function_coverage=1 01:30:39.393 --rc genhtml_legend=1 01:30:39.393 --rc geninfo_all_blocks=1 01:30:39.393 --rc geninfo_unexecuted_blocks=1 01:30:39.393 01:30:39.393 ' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:30:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:39.393 --rc genhtml_branch_coverage=1 01:30:39.393 --rc genhtml_function_coverage=1 01:30:39.393 --rc genhtml_legend=1 01:30:39.393 --rc geninfo_all_blocks=1 01:30:39.393 --rc geninfo_unexecuted_blocks=1 01:30:39.393 01:30:39.393 ' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:30:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:39.393 --rc genhtml_branch_coverage=1 01:30:39.393 --rc genhtml_function_coverage=1 01:30:39.393 --rc genhtml_legend=1 01:30:39.393 --rc geninfo_all_blocks=1 01:30:39.393 --rc geninfo_unexecuted_blocks=1 01:30:39.393 01:30:39.393 ' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:30:39.393 
05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77011 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77011 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77011 ']' 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:39.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
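Per the suite table above, the 'basic' run maps to three fio jobs (randw-verify, randw-verify-j2, randw-verify-depth128) against an FTL bdev named ftl0, with a 240 s RPC timeout and the target on three cores (-m 7). While the target comes up, it is worth previewing the bdev stack the script assembles next; a condensed sketch of the rpc.py calls that follow (UUIDs elided, sizes in MiB as computed by get_bdev_size):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base NVMe, 5120 MiB
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs uuid>            # thin lvol, 103424 MiB
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV-cache NVMe
  rpc.py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB cache split
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> -c nvc0n1p0 --l2p_dram_limit 60

So the FTL device is built on a thin-provisioned logical volume as its base and a split of the second NVMe namespace as its non-volatile write cache.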
01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:39.393 05:25:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:30:39.393 [2024-12-09 05:25:21.575961] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:30:39.393 [2024-12-09 05:25:21.576087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77011 ] 01:30:39.393 [2024-12-09 05:25:21.765600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:30:39.652 [2024-12-09 05:25:21.903548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:30:39.652 [2024-12-09 05:25:21.903685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:39.652 [2024-12-09 05:25:21.903723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 01:30:40.587 05:25:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:30:40.845 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:30:41.105 { 01:30:41.105 "name": "nvme0n1", 01:30:41.105 "aliases": [ 01:30:41.105 "d915df33-baf4-4497-a4e7-7b36cf4cc9e4" 01:30:41.105 ], 01:30:41.105 "product_name": "NVMe disk", 01:30:41.105 "block_size": 4096, 01:30:41.105 "num_blocks": 1310720, 01:30:41.105 "uuid": "d915df33-baf4-4497-a4e7-7b36cf4cc9e4", 01:30:41.105 "numa_id": -1, 01:30:41.105 "assigned_rate_limits": { 01:30:41.105 "rw_ios_per_sec": 0, 01:30:41.105 "rw_mbytes_per_sec": 0, 01:30:41.105 "r_mbytes_per_sec": 0, 01:30:41.105 "w_mbytes_per_sec": 0 01:30:41.105 }, 01:30:41.105 "claimed": false, 01:30:41.105 "zoned": false, 01:30:41.105 "supported_io_types": { 01:30:41.105 "read": true, 01:30:41.105 "write": true, 01:30:41.105 "unmap": true, 01:30:41.105 "flush": true, 01:30:41.105 "reset": true, 01:30:41.105 "nvme_admin": true, 01:30:41.105 "nvme_io": true, 01:30:41.105 "nvme_io_md": 
false, 01:30:41.105 "write_zeroes": true, 01:30:41.105 "zcopy": false, 01:30:41.105 "get_zone_info": false, 01:30:41.105 "zone_management": false, 01:30:41.105 "zone_append": false, 01:30:41.105 "compare": true, 01:30:41.105 "compare_and_write": false, 01:30:41.105 "abort": true, 01:30:41.105 "seek_hole": false, 01:30:41.105 "seek_data": false, 01:30:41.105 "copy": true, 01:30:41.105 "nvme_iov_md": false 01:30:41.105 }, 01:30:41.105 "driver_specific": { 01:30:41.105 "nvme": [ 01:30:41.105 { 01:30:41.105 "pci_address": "0000:00:11.0", 01:30:41.105 "trid": { 01:30:41.105 "trtype": "PCIe", 01:30:41.105 "traddr": "0000:00:11.0" 01:30:41.105 }, 01:30:41.105 "ctrlr_data": { 01:30:41.105 "cntlid": 0, 01:30:41.105 "vendor_id": "0x1b36", 01:30:41.105 "model_number": "QEMU NVMe Ctrl", 01:30:41.105 "serial_number": "12341", 01:30:41.105 "firmware_revision": "8.0.0", 01:30:41.105 "subnqn": "nqn.2019-08.org.qemu:12341", 01:30:41.105 "oacs": { 01:30:41.105 "security": 0, 01:30:41.105 "format": 1, 01:30:41.105 "firmware": 0, 01:30:41.105 "ns_manage": 1 01:30:41.105 }, 01:30:41.105 "multi_ctrlr": false, 01:30:41.105 "ana_reporting": false 01:30:41.105 }, 01:30:41.105 "vs": { 01:30:41.105 "nvme_version": "1.4" 01:30:41.105 }, 01:30:41.105 "ns_data": { 01:30:41.105 "id": 1, 01:30:41.105 "can_share": false 01:30:41.105 } 01:30:41.105 } 01:30:41.105 ], 01:30:41.105 "mp_policy": "active_passive" 01:30:41.105 } 01:30:41.105 } 01:30:41.105 ]' 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:30:41.105 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:30:41.365 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 01:30:41.365 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:30:41.623 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12 01:30:41.623 05:25:23 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:41.882 05:25:24 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:30:41.882 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.158 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:30:42.158 { 01:30:42.158 "name": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:42.158 "aliases": [ 01:30:42.158 "lvs/nvme0n1p0" 01:30:42.158 ], 01:30:42.158 "product_name": "Logical Volume", 01:30:42.158 "block_size": 4096, 01:30:42.158 "num_blocks": 26476544, 01:30:42.158 "uuid": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:42.158 "assigned_rate_limits": { 01:30:42.158 "rw_ios_per_sec": 0, 01:30:42.158 "rw_mbytes_per_sec": 0, 01:30:42.158 "r_mbytes_per_sec": 0, 01:30:42.158 "w_mbytes_per_sec": 0 01:30:42.158 }, 01:30:42.158 "claimed": false, 01:30:42.158 "zoned": false, 01:30:42.158 "supported_io_types": { 01:30:42.158 "read": true, 01:30:42.158 "write": true, 01:30:42.158 "unmap": true, 01:30:42.158 "flush": false, 01:30:42.158 "reset": true, 01:30:42.158 "nvme_admin": false, 01:30:42.158 "nvme_io": false, 01:30:42.158 "nvme_io_md": false, 01:30:42.158 "write_zeroes": true, 01:30:42.158 "zcopy": false, 01:30:42.158 "get_zone_info": false, 01:30:42.158 "zone_management": false, 01:30:42.158 "zone_append": false, 01:30:42.158 "compare": false, 01:30:42.158 "compare_and_write": false, 01:30:42.158 "abort": false, 01:30:42.158 "seek_hole": true, 01:30:42.158 "seek_data": true, 01:30:42.158 "copy": false, 01:30:42.158 "nvme_iov_md": false 01:30:42.158 }, 01:30:42.158 "driver_specific": { 01:30:42.158 "lvol": { 01:30:42.158 "lvol_store_uuid": "0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12", 01:30:42.158 "base_bdev": "nvme0n1", 01:30:42.158 "thin_provision": true, 01:30:42.158 "num_allocated_clusters": 0, 01:30:42.159 "snapshot": false, 01:30:42.159 "clone": false, 01:30:42.159 "esnap_clone": false 01:30:42.159 } 01:30:42.159 } 01:30:42.159 } 01:30:42.159 ]' 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 01:30:42.159 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:30:42.418 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.677 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:30:42.677 { 01:30:42.677 "name": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:42.677 "aliases": [ 01:30:42.677 "lvs/nvme0n1p0" 01:30:42.677 ], 01:30:42.677 "product_name": "Logical Volume", 01:30:42.677 "block_size": 4096, 01:30:42.677 "num_blocks": 26476544, 01:30:42.677 "uuid": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:42.677 "assigned_rate_limits": { 01:30:42.677 "rw_ios_per_sec": 0, 01:30:42.677 "rw_mbytes_per_sec": 0, 01:30:42.677 "r_mbytes_per_sec": 0, 01:30:42.677 "w_mbytes_per_sec": 0 01:30:42.677 }, 01:30:42.677 "claimed": false, 01:30:42.678 "zoned": false, 01:30:42.678 "supported_io_types": { 01:30:42.678 "read": true, 01:30:42.678 "write": true, 01:30:42.678 "unmap": true, 01:30:42.678 "flush": false, 01:30:42.678 "reset": true, 01:30:42.678 "nvme_admin": false, 01:30:42.678 "nvme_io": false, 01:30:42.678 "nvme_io_md": false, 01:30:42.678 "write_zeroes": true, 01:30:42.678 "zcopy": false, 01:30:42.678 "get_zone_info": false, 01:30:42.678 "zone_management": false, 01:30:42.678 "zone_append": false, 01:30:42.678 "compare": false, 01:30:42.678 "compare_and_write": false, 01:30:42.678 "abort": false, 01:30:42.678 "seek_hole": true, 01:30:42.678 "seek_data": true, 01:30:42.678 "copy": false, 01:30:42.678 "nvme_iov_md": false 01:30:42.678 }, 01:30:42.678 "driver_specific": { 01:30:42.678 "lvol": { 01:30:42.678 "lvol_store_uuid": "0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12", 01:30:42.678 "base_bdev": "nvme0n1", 01:30:42.678 "thin_provision": true, 01:30:42.678 "num_allocated_clusters": 0, 01:30:42.678 "snapshot": false, 01:30:42.678 "clone": false, 01:30:42.678 "esnap_clone": false 01:30:42.678 } 01:30:42.678 } 01:30:42.678 } 01:30:42.678 ]' 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 01:30:42.678 05:25:24 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 01:30:42.937 
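The xtrace line above, '[' -eq 1 ']', together with the "unary operator expected" error printed next, records a small quoting bug in fio.sh: the flag variable tested on line 52 expands to the empty string, so the [ builtin receives only "-eq 1" and rejects it. The test evaluates false and the run continues down the other branch, which is why the log proceeds normally afterwards. A defensive form of such a test (the actual variable name is not visible in this trace, so the one below is hypothetical):

  # quote the flag and give it a default so an unset value compares as 0
  if [[ "${l2p_in_band:-0}" -eq 1 ]]; then   # l2p_in_band: placeholder name
      ...
  fi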
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0066893-c898-4a4c-947b-b84fdee7ec13 01:30:42.937 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:30:42.937 { 01:30:42.937 "name": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:42.937 "aliases": [ 01:30:42.937 "lvs/nvme0n1p0" 01:30:42.937 ], 01:30:42.937 "product_name": "Logical Volume", 01:30:42.937 "block_size": 4096, 01:30:42.937 "num_blocks": 26476544, 01:30:42.937 "uuid": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:42.937 "assigned_rate_limits": { 01:30:42.937 "rw_ios_per_sec": 0, 01:30:42.937 "rw_mbytes_per_sec": 0, 01:30:42.937 "r_mbytes_per_sec": 0, 01:30:42.937 "w_mbytes_per_sec": 0 01:30:42.937 }, 01:30:42.937 "claimed": false, 01:30:42.937 "zoned": false, 01:30:42.937 "supported_io_types": { 01:30:42.937 "read": true, 01:30:42.937 "write": true, 01:30:42.937 "unmap": true, 01:30:42.937 "flush": false, 01:30:42.937 "reset": true, 01:30:42.937 "nvme_admin": false, 01:30:42.937 "nvme_io": false, 01:30:42.937 "nvme_io_md": false, 01:30:42.937 "write_zeroes": true, 01:30:42.937 "zcopy": false, 01:30:42.937 "get_zone_info": false, 01:30:42.937 "zone_management": false, 01:30:42.937 "zone_append": false, 01:30:42.937 "compare": false, 01:30:42.938 "compare_and_write": false, 01:30:42.938 "abort": false, 01:30:42.938 "seek_hole": true, 01:30:42.938 "seek_data": true, 01:30:42.938 "copy": false, 01:30:42.938 "nvme_iov_md": false 01:30:42.938 }, 01:30:42.938 "driver_specific": { 01:30:42.938 "lvol": { 01:30:42.938 "lvol_store_uuid": "0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12", 01:30:42.938 "base_bdev": "nvme0n1", 01:30:42.938 "thin_provision": true, 01:30:42.938 "num_allocated_clusters": 0, 01:30:42.938 "snapshot": false, 01:30:42.938 "clone": false, 01:30:42.938 "esnap_clone": false 01:30:42.938 } 01:30:42.938 } 01:30:42.938 } 01:30:42.938 ]' 01:30:42.938 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:30:43.200 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:30:43.200 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:30:43.200 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 01:30:43.200 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:30:43.201 05:25:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 01:30:43.201 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 01:30:43.201 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 01:30:43.201 05:25:25 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d0066893-c898-4a4c-947b-b84fdee7ec13 -c nvc0n1p0 --l2p_dram_limit 60 01:30:43.201 [2024-12-09 05:25:25.616994] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.617060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:30:43.201 [2024-12-09 05:25:25.617084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:30:43.201 [2024-12-09 05:25:25.617098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.617232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.617250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:30:43.201 [2024-12-09 05:25:25.617270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 01:30:43.201 [2024-12-09 05:25:25.617283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.617365] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:30:43.201 [2024-12-09 05:25:25.618498] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:30:43.201 [2024-12-09 05:25:25.618547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.618562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:30:43.201 [2024-12-09 05:25:25.618581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 01:30:43.201 [2024-12-09 05:25:25.618594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.618755] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e21b221b-a6ff-4cd9-a4f3-70d0f937218d 01:30:43.201 [2024-12-09 05:25:25.621418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.621478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:30:43.201 [2024-12-09 05:25:25.621495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 01:30:43.201 [2024-12-09 05:25:25.621511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.635306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.635359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:30:43.201 [2024-12-09 05:25:25.635376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.699 ms 01:30:43.201 [2024-12-09 05:25:25.635401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.635574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.635596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:30:43.201 [2024-12-09 05:25:25.635610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 01:30:43.201 [2024-12-09 05:25:25.635632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.635743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.635762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:30:43.201 [2024-12-09 05:25:25.635776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:30:43.201 [2024-12-09 05:25:25.635793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 01:30:43.201 [2024-12-09 05:25:25.635846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:30:43.201 [2024-12-09 05:25:25.641804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.641844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:30:43.201 [2024-12-09 05:25:25.641886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.983 ms 01:30:43.201 [2024-12-09 05:25:25.641898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.641960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.641974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:30:43.201 [2024-12-09 05:25:25.641992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:30:43.201 [2024-12-09 05:25:25.642004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.642096] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:30:43.201 [2024-12-09 05:25:25.642293] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:30:43.201 [2024-12-09 05:25:25.642325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:30:43.201 [2024-12-09 05:25:25.642342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:30:43.201 [2024-12-09 05:25:25.642361] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:30:43.201 [2024-12-09 05:25:25.642376] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:30:43.201 [2024-12-09 05:25:25.642394] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:30:43.201 [2024-12-09 05:25:25.642407] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:30:43.201 [2024-12-09 05:25:25.642422] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:30:43.201 [2024-12-09 05:25:25.642435] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:30:43.201 [2024-12-09 05:25:25.642473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.642486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:30:43.201 [2024-12-09 05:25:25.642504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 01:30:43.201 [2024-12-09 05:25:25.642516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.642616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.201 [2024-12-09 05:25:25.642630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:30:43.201 [2024-12-09 05:25:25.642647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 01:30:43.201 [2024-12-09 05:25:25.642660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.201 [2024-12-09 05:25:25.642793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:30:43.201 [2024-12-09 05:25:25.642819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:30:43.201 
[2024-12-09 05:25:25.642837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:30:43.201 [2024-12-09 05:25:25.642851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.642868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:30:43.201 [2024-12-09 05:25:25.642880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.642895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:30:43.201 [2024-12-09 05:25:25.642907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:30:43.201 [2024-12-09 05:25:25.642924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:30:43.201 [2024-12-09 05:25:25.642935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:30:43.201 [2024-12-09 05:25:25.642972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:30:43.201 [2024-12-09 05:25:25.642984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:30:43.201 [2024-12-09 05:25:25.643010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:30:43.201 [2024-12-09 05:25:25.643022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:30:43.201 [2024-12-09 05:25:25.643038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:30:43.201 [2024-12-09 05:25:25.643049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:30:43.201 [2024-12-09 05:25:25.643080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:30:43.201 [2024-12-09 05:25:25.643122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:30:43.201 [2024-12-09 05:25:25.643159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:30:43.201 [2024-12-09 05:25:25.643207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:30:43.201 [2024-12-09 05:25:25.643246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:30:43.201 [2024-12-09 05:25:25.643290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 01:30:43.201 [2024-12-09 05:25:25.643338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:30:43.201 [2024-12-09 05:25:25.643349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:30:43.201 [2024-12-09 05:25:25.643364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:30:43.201 [2024-12-09 05:25:25.643376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:30:43.201 [2024-12-09 05:25:25.643391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:30:43.201 [2024-12-09 05:25:25.643403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:30:43.201 [2024-12-09 05:25:25.643432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:30:43.201 [2024-12-09 05:25:25.643447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643458] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:30:43.201 [2024-12-09 05:25:25.643491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:30:43.201 [2024-12-09 05:25:25.643504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:30:43.201 [2024-12-09 05:25:25.643533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:30:43.201 [2024-12-09 05:25:25.643552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:30:43.201 [2024-12-09 05:25:25.643563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:30:43.201 [2024-12-09 05:25:25.643579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:30:43.201 [2024-12-09 05:25:25.643590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:30:43.201 [2024-12-09 05:25:25.643606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:30:43.201 [2024-12-09 05:25:25.643625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:30:43.201 [2024-12-09 05:25:25.643644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:30:43.201 [2024-12-09 05:25:25.643659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:30:43.201 [2024-12-09 05:25:25.643678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:30:43.201 [2024-12-09 05:25:25.643691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:30:43.201 [2024-12-09 05:25:25.643709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:30:43.201 [2024-12-09 05:25:25.643721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:30:43.202 [2024-12-09 05:25:25.643737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:30:43.202 [2024-12-09 
05:25:25.643750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:30:43.202 [2024-12-09 05:25:25.643767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:30:43.202 [2024-12-09 05:25:25.643780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:30:43.202 [2024-12-09 05:25:25.643802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:30:43.202 [2024-12-09 05:25:25.643815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:30:43.202 [2024-12-09 05:25:25.643831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:30:43.202 [2024-12-09 05:25:25.643844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:30:43.202 [2024-12-09 05:25:25.643860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:30:43.202 [2024-12-09 05:25:25.643872] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:30:43.202 [2024-12-09 05:25:25.643900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:30:43.202 [2024-12-09 05:25:25.643914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:30:43.202 [2024-12-09 05:25:25.643934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:30:43.202 [2024-12-09 05:25:25.643947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:30:43.202 [2024-12-09 05:25:25.643967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:30:43.202 [2024-12-09 05:25:25.643982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:43.202 [2024-12-09 05:25:25.644003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:30:43.202 [2024-12-09 05:25:25.644017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 01:30:43.202 [2024-12-09 05:25:25.644034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:43.202 [2024-12-09 05:25:25.644121] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
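The trace above is the "FTL startup" management sequence produced by the bdev_ftl_create RPC: the base bdev d0066893-c898-4a4c-947b-b84fdee7ec13 is a 103424 MiB logical volume (26476544 blocks x 4096 B), nvc0n1p0 is the write-buffer cache split from nvc0n1 (5171 MiB, which matches ~5% of the base size in this run; the exact formula lives in the test's common.sh), and the layout dump enumerates the NV-cache and base-device metadata regions. The earlier "unary operator expected" message from fio.sh line 52 is a benign test-script quirk: '[' -eq 1 ']' runs with an empty left-hand operand, so the comparison fails harmlessly and the run continues. A minimal sketch of the same create path, assuming rpc.py is on PATH and jq is installed (both are used via full paths in the log):

  bdev=d0066893-c898-4a4c-947b-b84fdee7ec13
  bs=$(rpc.py bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')    # 4096
  nb=$(rpc.py bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')    # 26476544
  size_mb=$(( bs * nb / 1024 / 1024 ))                             # 103424 MiB
  cache_mb=$(( size_mb * 5 / 100 ))                                # 5171 MiB (assumed formula)
  rpc.py bdev_split_create nvc0n1 -s "$cache_mb" 1                 # -> nvc0n1p0
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$bdev" -c nvc0n1p0 --l2p_dram_limit 60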
01:30:43.202 [2024-12-09 05:25:25.644144] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:30:48.478 [2024-12-09 05:25:30.182213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.182349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:30:48.478 [2024-12-09 05:25:30.182373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4545.455 ms 01:30:48.478 [2024-12-09 05:25:30.182394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.228668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.228771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:30:48.478 [2024-12-09 05:25:30.228794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.866 ms 01:30:48.478 [2024-12-09 05:25:30.228812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.228984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.229005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:30:48.478 [2024-12-09 05:25:30.229020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 01:30:48.478 [2024-12-09 05:25:30.229040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.292344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.292440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:30:48.478 [2024-12-09 05:25:30.292459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.338 ms 01:30:48.478 [2024-12-09 05:25:30.292491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.292549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.292567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:30:48.478 [2024-12-09 05:25:30.292580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:30:48.478 [2024-12-09 05:25:30.292597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.293498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.293527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:30:48.478 [2024-12-09 05:25:30.293547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 01:30:48.478 [2024-12-09 05:25:30.293564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.293712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.293733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:30:48.478 [2024-12-09 05:25:30.293747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 01:30:48.478 [2024-12-09 05:25:30.293767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.320708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.320783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:30:48.478 [2024-12-09 
05:25:30.320800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.945 ms 01:30:48.478 [2024-12-09 05:25:30.320817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.334725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:30:48.478 [2024-12-09 05:25:30.361118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.361181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:30:48.478 [2024-12-09 05:25:30.361210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.227 ms 01:30:48.478 [2024-12-09 05:25:30.361224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.462047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.462111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:30:48.478 [2024-12-09 05:25:30.462143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.904 ms 01:30:48.478 [2024-12-09 05:25:30.462157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.462432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.462452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:30:48.478 [2024-12-09 05:25:30.462488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 01:30:48.478 [2024-12-09 05:25:30.462502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.499681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.499732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:30:48.478 [2024-12-09 05:25:30.499755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.148 ms 01:30:48.478 [2024-12-09 05:25:30.499768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.536184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.536232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:30:48.478 [2024-12-09 05:25:30.536270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.406 ms 01:30:48.478 [2024-12-09 05:25:30.536283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.537150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.537186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:30:48.478 [2024-12-09 05:25:30.537206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 01:30:48.478 [2024-12-09 05:25:30.537219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.478 [2024-12-09 05:25:30.642337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.478 [2024-12-09 05:25:30.642389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:30:48.479 [2024-12-09 05:25:30.642437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.199 ms 01:30:48.479 [2024-12-09 05:25:30.642451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.479 [2024-12-09 
05:25:30.681440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.479 [2024-12-09 05:25:30.681496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:30:48.479 [2024-12-09 05:25:30.681519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.921 ms 01:30:48.479 [2024-12-09 05:25:30.681550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.479 [2024-12-09 05:25:30.716983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.479 [2024-12-09 05:25:30.717030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:30:48.479 [2024-12-09 05:25:30.717051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.427 ms 01:30:48.479 [2024-12-09 05:25:30.717081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.479 [2024-12-09 05:25:30.753487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.479 [2024-12-09 05:25:30.753532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:30:48.479 [2024-12-09 05:25:30.753569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.398 ms 01:30:48.479 [2024-12-09 05:25:30.753581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.479 [2024-12-09 05:25:30.753657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.479 [2024-12-09 05:25:30.753672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:30:48.479 [2024-12-09 05:25:30.753702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:30:48.479 [2024-12-09 05:25:30.753715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.479 [2024-12-09 05:25:30.753907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:48.479 [2024-12-09 05:25:30.753926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:30:48.479 [2024-12-09 05:25:30.753944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 01:30:48.479 [2024-12-09 05:25:30.753957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:48.479 [2024-12-09 05:25:30.755597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5146.322 ms, result 0 01:30:48.479 { 01:30:48.479 "name": "ftl0", 01:30:48.479 "uuid": "e21b221b-a6ff-4cd9-a4f3-70d0f937218d" 01:30:48.479 } 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:48.479 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:30:48.737 05:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 01:30:48.737 [ 01:30:48.737 { 01:30:48.737 "name": "ftl0", 01:30:48.737 "aliases": [ 01:30:48.737 "e21b221b-a6ff-4cd9-a4f3-70d0f937218d" 01:30:48.737 ], 01:30:48.737 "product_name": "FTL 
disk", 01:30:48.737 "block_size": 4096, 01:30:48.737 "num_blocks": 20971520, 01:30:48.737 "uuid": "e21b221b-a6ff-4cd9-a4f3-70d0f937218d", 01:30:48.737 "assigned_rate_limits": { 01:30:48.737 "rw_ios_per_sec": 0, 01:30:48.737 "rw_mbytes_per_sec": 0, 01:30:48.737 "r_mbytes_per_sec": 0, 01:30:48.737 "w_mbytes_per_sec": 0 01:30:48.737 }, 01:30:48.737 "claimed": false, 01:30:48.737 "zoned": false, 01:30:48.737 "supported_io_types": { 01:30:48.737 "read": true, 01:30:48.737 "write": true, 01:30:48.737 "unmap": true, 01:30:48.737 "flush": true, 01:30:48.737 "reset": false, 01:30:48.737 "nvme_admin": false, 01:30:48.737 "nvme_io": false, 01:30:48.737 "nvme_io_md": false, 01:30:48.737 "write_zeroes": true, 01:30:48.737 "zcopy": false, 01:30:48.737 "get_zone_info": false, 01:30:48.737 "zone_management": false, 01:30:48.737 "zone_append": false, 01:30:48.737 "compare": false, 01:30:48.737 "compare_and_write": false, 01:30:48.737 "abort": false, 01:30:48.737 "seek_hole": false, 01:30:48.737 "seek_data": false, 01:30:48.737 "copy": false, 01:30:48.737 "nvme_iov_md": false 01:30:48.737 }, 01:30:48.737 "driver_specific": { 01:30:48.737 "ftl": { 01:30:48.737 "base_bdev": "d0066893-c898-4a4c-947b-b84fdee7ec13", 01:30:48.737 "cache": "nvc0n1p0" 01:30:48.737 } 01:30:48.737 } 01:30:48.737 } 01:30:48.737 ] 01:30:48.737 05:25:31 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 01:30:48.737 05:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 01:30:48.737 05:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:30:48.995 05:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 01:30:48.995 05:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:30:49.254 [2024-12-09 05:25:31.562864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.254 [2024-12-09 05:25:31.562914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:30:49.254 [2024-12-09 05:25:31.562932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:30:49.254 [2024-12-09 05:25:31.562963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.254 [2024-12-09 05:25:31.563015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:30:49.254 [2024-12-09 05:25:31.567969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.254 [2024-12-09 05:25:31.568006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:30:49.254 [2024-12-09 05:25:31.568030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.931 ms 01:30:49.254 [2024-12-09 05:25:31.568044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.254 [2024-12-09 05:25:31.568689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.254 [2024-12-09 05:25:31.568716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:30:49.254 [2024-12-09 05:25:31.568736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 01:30:49.254 [2024-12-09 05:25:31.568749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.254 [2024-12-09 05:25:31.571293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.254 [2024-12-09 05:25:31.571314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:30:49.254 
[2024-12-09 05:25:31.571332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 01:30:49.254 [2024-12-09 05:25:31.571344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.254 [2024-12-09 05:25:31.576388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.254 [2024-12-09 05:25:31.576426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:30:49.254 [2024-12-09 05:25:31.576446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.013 ms 01:30:49.254 [2024-12-09 05:25:31.576470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.254 [2024-12-09 05:25:31.614044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.254 [2024-12-09 05:25:31.614086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:30:49.254 [2024-12-09 05:25:31.614128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.540 ms 01:30:49.254 [2024-12-09 05:25:31.614141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.254 [2024-12-09 05:25:31.637401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.255 [2024-12-09 05:25:31.637444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:30:49.255 [2024-12-09 05:25:31.637497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.230 ms 01:30:49.255 [2024-12-09 05:25:31.637511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.255 [2024-12-09 05:25:31.637751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.255 [2024-12-09 05:25:31.637772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:30:49.255 [2024-12-09 05:25:31.637791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 01:30:49.255 [2024-12-09 05:25:31.637804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.255 [2024-12-09 05:25:31.674243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.255 [2024-12-09 05:25:31.674286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:30:49.255 [2024-12-09 05:25:31.674307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.455 ms 01:30:49.255 [2024-12-09 05:25:31.674320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.255 [2024-12-09 05:25:31.709877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.255 [2024-12-09 05:25:31.709925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:30:49.255 [2024-12-09 05:25:31.709962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.549 ms 01:30:49.255 [2024-12-09 05:25:31.709974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.515 [2024-12-09 05:25:31.744724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.515 [2024-12-09 05:25:31.744768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:30:49.515 [2024-12-09 05:25:31.744804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.744 ms 01:30:49.515 [2024-12-09 05:25:31.744816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.515 [2024-12-09 05:25:31.779408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.515 [2024-12-09 05:25:31.779454] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:30:49.515 [2024-12-09 05:25:31.779502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.488 ms 01:30:49.515 [2024-12-09 05:25:31.779515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.515 [2024-12-09 05:25:31.779579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:30:49.515 [2024-12-09 05:25:31.779600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 
[2024-12-09 05:25:31.779940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.779991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 01:30:49.515 [2024-12-09 05:25:31.780332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:30:49.515 [2024-12-09 05:25:31.780559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.780986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:30:49.516 [2024-12-09 05:25:31.781193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:30:49.516 [2024-12-09 05:25:31.781210] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e21b221b-a6ff-4cd9-a4f3-70d0f937218d 01:30:49.516 [2024-12-09 05:25:31.781222] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:30:49.516 [2024-12-09 05:25:31.781242] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:30:49.516 [2024-12-09 05:25:31.781259] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:30:49.516 [2024-12-09 05:25:31.781276] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:30:49.516 [2024-12-09 05:25:31.781288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:30:49.516 [2024-12-09 05:25:31.781305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:30:49.516 [2024-12-09 05:25:31.781317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:30:49.516 [2024-12-09 05:25:31.781332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:30:49.516 [2024-12-09 05:25:31.781343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:30:49.516 [2024-12-09 05:25:31.781359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.516 [2024-12-09 05:25:31.781372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:30:49.516 [2024-12-09 05:25:31.781390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 01:30:49.516 [2024-12-09 05:25:31.781402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.516 [2024-12-09 05:25:31.801676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.516 [2024-12-09 05:25:31.801716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:30:49.516 [2024-12-09 05:25:31.801752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.214 ms 01:30:49.516 [2024-12-09 05:25:31.801765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.516 [2024-12-09 05:25:31.802392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:30:49.516 [2024-12-09 05:25:31.802419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:30:49.516 [2024-12-09 05:25:31.802437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 01:30:49.516 [2024-12-09 05:25:31.802449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.516 [2024-12-09 05:25:31.874198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.516 [2024-12-09 05:25:31.874246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:30:49.516 [2024-12-09 05:25:31.874267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.516 [2024-12-09 05:25:31.874280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
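This shutdown sequence, started by the bdev_ftl_unload RPC at fio.sh@73 above and continuing through the rollback steps below, persists the L2P and metadata and sets the FTL clean state before tearing down the init steps in reverse order. The statistics dump is consistent with a device that never saw user I/O: all 100 bands are still free, and with total writes = 960 (metadata only) against user writes = 0, the write amplification factor WAF = total writes / user writes is reported as "inf". The teardown itself is a single call, sketched with the same arguments the test uses:

  rpc.py bdev_ftl_unload -b ftl0    # persists L2P/metadata, sets clean state, rolls back init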
01:30:49.516 [2024-12-09 05:25:31.874383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.516 [2024-12-09 05:25:31.874397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:30:49.516 [2024-12-09 05:25:31.874414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.516 [2024-12-09 05:25:31.874428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.516 [2024-12-09 05:25:31.874645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.516 [2024-12-09 05:25:31.874669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:30:49.516 [2024-12-09 05:25:31.874687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.516 [2024-12-09 05:25:31.874699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.516 [2024-12-09 05:25:31.874742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.516 [2024-12-09 05:25:31.874756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:30:49.516 [2024-12-09 05:25:31.874773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.516 [2024-12-09 05:25:31.874785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.775 [2024-12-09 05:25:32.012690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.775 [2024-12-09 05:25:32.012800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:30:49.775 [2024-12-09 05:25:32.012823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.775 [2024-12-09 05:25:32.012837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.775 [2024-12-09 05:25:32.116104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.775 [2024-12-09 05:25:32.116196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:30:49.775 [2024-12-09 05:25:32.116219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.775 [2024-12-09 05:25:32.116233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.775 [2024-12-09 05:25:32.116414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.775 [2024-12-09 05:25:32.116429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:30:49.775 [2024-12-09 05:25:32.116452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.775 [2024-12-09 05:25:32.116486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.775 [2024-12-09 05:25:32.116626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.775 [2024-12-09 05:25:32.116642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:30:49.775 [2024-12-09 05:25:32.116660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.775 [2024-12-09 05:25:32.116673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.775 [2024-12-09 05:25:32.116843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.775 [2024-12-09 05:25:32.116868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:30:49.776 [2024-12-09 05:25:32.116891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.776 [2024-12-09 
05:25:32.116903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.776 [2024-12-09 05:25:32.116985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.776 [2024-12-09 05:25:32.117001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:30:49.776 [2024-12-09 05:25:32.117018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.776 [2024-12-09 05:25:32.117031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.776 [2024-12-09 05:25:32.117103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.776 [2024-12-09 05:25:32.117116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:30:49.776 [2024-12-09 05:25:32.117133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.776 [2024-12-09 05:25:32.117150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.776 [2024-12-09 05:25:32.117244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:30:49.776 [2024-12-09 05:25:32.117259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:30:49.776 [2024-12-09 05:25:32.117276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:30:49.776 [2024-12-09 05:25:32.117288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:30:49.776 [2024-12-09 05:25:32.117550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 555.501 ms, result 0 01:30:49.776 true 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77011 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77011 ']' 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77011 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77011 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:49.776 killing process with pid 77011 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77011' 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77011 01:30:49.776 05:25:32 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77011 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:30:55.095 05:25:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 01:30:55.095 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 01:30:55.095 fio-3.35 01:30:55.095 Starting 1 thread 01:31:01.661 01:31:01.661 test: (groupid=0, jobs=1): err= 0: pid=77241: Mon Dec 9 05:25:43 2024 01:31:01.661 read: IOPS=885, BW=58.8MiB/s (61.6MB/s)(255MiB/4331msec) 01:31:01.661 slat (nsec): min=4351, max=93063, avg=6256.07, stdev=2849.74 01:31:01.661 clat (usec): min=368, max=1138, avg=506.40, stdev=50.26 01:31:01.661 lat (usec): min=381, max=1143, avg=512.65, stdev=50.54 01:31:01.661 clat percentiles (usec): 01:31:01.661 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 461], 01:31:01.661 | 30.00th=[ 474], 40.00th=[ 510], 50.00th=[ 519], 60.00th=[ 523], 01:31:01.661 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 570], 01:31:01.661 | 99.00th=[ 635], 99.50th=[ 685], 99.90th=[ 873], 99.95th=[ 1074], 01:31:01.661 | 99.99th=[ 1139] 01:31:01.661 write: IOPS=891, BW=59.2MiB/s (62.1MB/s)(256MiB/4326msec); 0 zone resets 01:31:01.661 slat (nsec): min=14965, max=75258, avg=24957.13, stdev=5908.22 01:31:01.661 clat (usec): min=366, max=1074, avg=576.56, stdev=67.48 01:31:01.661 lat (usec): min=397, max=1101, avg=601.52, stdev=67.48 01:31:01.661 clat percentiles (usec): 01:31:01.661 | 1.00th=[ 445], 5.00th=[ 478], 10.00th=[ 498], 20.00th=[ 537], 01:31:01.661 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 603], 01:31:01.661 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 635], 01:31:01.661 | 99.00th=[ 906], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1045], 01:31:01.661 | 99.99th=[ 1074] 01:31:01.661 bw ( KiB/s): min=58480, max=62560, per=99.93%, avg=60571.00, stdev=1258.98, samples=8 01:31:01.661 iops : min= 860, max= 920, avg=890.75, stdev=18.51, samples=8 01:31:01.661 lat (usec) : 500=23.89%, 750=75.07%, 1000=0.95% 01:31:01.661 
lat (msec) : 2=0.09% 01:31:01.661 cpu : usr=98.73%, sys=0.32%, ctx=10, majf=0, minf=1169 01:31:01.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:31:01.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:31:01.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:31:01.661 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 01:31:01.661 latency : target=0, window=0, percentile=100.00%, depth=1 01:31:01.661 01:31:01.661 Run status group 0 (all jobs): 01:31:01.661 READ: bw=58.8MiB/s (61.6MB/s), 58.8MiB/s-58.8MiB/s (61.6MB/s-61.6MB/s), io=255MiB (267MB), run=4331-4331msec 01:31:01.661 WRITE: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=256MiB (269MB), run=4326-4326msec 01:31:03.040 ----------------------------------------------------- 01:31:03.040 Suppressions used: 01:31:03.040 count bytes template 01:31:03.040 1 5 /usr/src/fio/parse.c 01:31:03.040 1 8 libtcmalloc_minimal.so 01:31:03.040 1 904 libcrypto.so 01:31:03.040 ----------------------------------------------------- 01:31:03.040 01:31:03.040 05:25:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 01:31:03.040 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:31:03.041 05:25:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 01:31:03.299 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 01:31:03.299 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 01:31:03.299 fio-3.35 01:31:03.299 Starting 2 threads 01:31:29.843 01:31:29.843 first_half: (groupid=0, jobs=1): err= 0: pid=77355: Mon Dec 9 05:26:11 2024 01:31:29.843 read: IOPS=2646, BW=10.3MiB/s (10.8MB/s)(255MiB/24653msec) 01:31:29.843 slat (nsec): min=3602, max=46447, avg=7698.24, stdev=2937.24 01:31:29.843 clat (usec): min=810, max=274334, avg=37129.54, stdev=20668.46 01:31:29.843 lat (usec): min=818, max=274340, avg=37137.24, stdev=20668.87 01:31:29.843 clat percentiles (msec): 01:31:29.843 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 01:31:29.843 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 01:31:29.843 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 47], 01:31:29.843 | 99.00th=[ 159], 99.50th=[ 182], 99.90th=[ 205], 99.95th=[ 232], 01:31:29.843 | 99.99th=[ 268] 01:31:29.843 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(256MiB/19218msec); 0 zone resets 01:31:29.844 slat (usec): min=4, max=1374, avg=10.48, stdev=10.25 01:31:29.844 clat (usec): min=458, max=87180, avg=11129.20, stdev=19105.61 01:31:29.844 lat (usec): min=469, max=87196, avg=11139.68, stdev=19105.99 01:31:29.844 clat percentiles (usec): 01:31:29.844 | 1.00th=[ 1020], 5.00th=[ 1287], 10.00th=[ 1500], 20.00th=[ 1844], 01:31:29.844 | 30.00th=[ 3064], 40.00th=[ 4686], 50.00th=[ 5538], 60.00th=[ 6259], 01:31:29.844 | 70.00th=[ 7308], 80.00th=[10945], 90.00th=[16450], 95.00th=[74974], 01:31:29.844 | 99.00th=[81265], 99.50th=[83362], 99.90th=[84411], 99.95th=[85459], 01:31:29.844 | 99.99th=[86508] 01:31:29.844 bw ( KiB/s): min= 136, max=40064, per=86.52%, avg=22795.13, stdev=11292.26, samples=23 01:31:29.844 iops : min= 34, max=10016, avg=5698.78, stdev=2823.06, samples=23 01:31:29.844 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.38% 01:31:29.844 lat (msec) : 2=11.35%, 4=6.81%, 10=20.89%, 20=7.03%, 50=47.36% 01:31:29.844 lat (msec) : 100=4.72%, 250=1.39%, 500=0.01% 01:31:29.844 cpu : usr=99.08%, sys=0.19%, ctx=70, majf=0, minf=5599 01:31:29.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:31:29.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:31:29.844 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 01:31:29.844 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:31:29.844 latency : target=0, window=0, percentile=100.00%, depth=128 01:31:29.844 second_half: (groupid=0, jobs=1): err= 0: pid=77356: Mon Dec 9 05:26:11 2024 01:31:29.844 read: IOPS=2630, BW=10.3MiB/s (10.8MB/s)(255MiB/24803msec) 01:31:29.844 slat (nsec): min=3544, max=97383, avg=12817.09, stdev=5138.47 01:31:29.844 clat (usec): min=856, max=279378, avg=36178.65, stdev=19505.72 01:31:29.844 lat (usec): min=876, max=279395, avg=36191.47, stdev=19506.42 01:31:29.844 clat percentiles (msec): 01:31:29.844 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 01:31:29.844 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 01:31:29.844 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 46], 01:31:29.844 | 
99.00th=[ 150], 99.50th=[ 169], 99.90th=[ 213], 99.95th=[ 253], 01:31:29.844 | 99.99th=[ 275] 01:31:29.844 write: IOPS=3293, BW=12.9MiB/s (13.5MB/s)(256MiB/19899msec); 0 zone resets 01:31:29.844 slat (usec): min=4, max=853, avg=14.12, stdev=10.03 01:31:29.844 clat (usec): min=429, max=87280, avg=12363.81, stdev=19691.25 01:31:29.844 lat (usec): min=449, max=87304, avg=12377.93, stdev=19691.95 01:31:29.844 clat percentiles (usec): 01:31:29.844 | 1.00th=[ 971], 5.00th=[ 1205], 10.00th=[ 1401], 20.00th=[ 1778], 01:31:29.844 | 30.00th=[ 3261], 40.00th=[ 5407], 50.00th=[ 6456], 60.00th=[ 7308], 01:31:29.844 | 70.00th=[ 8586], 80.00th=[11731], 90.00th=[29230], 95.00th=[76022], 01:31:29.844 | 99.00th=[82314], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 01:31:29.844 | 99.99th=[86508] 01:31:29.844 bw ( KiB/s): min= 2808, max=42264, per=82.91%, avg=21845.33, stdev=11981.24, samples=24 01:31:29.844 iops : min= 702, max=10566, avg=5461.33, stdev=2995.31, samples=24 01:31:29.844 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.60% 01:31:29.844 lat (msec) : 2=11.37%, 4=4.76%, 10=22.36%, 20=6.91%, 50=47.78% 01:31:29.844 lat (msec) : 100=4.96%, 250=1.16%, 500=0.03% 01:31:29.844 cpu : usr=99.07%, sys=0.25%, ctx=42, majf=0, minf=5516 01:31:29.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:31:29.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:31:29.844 complete : 0=0.0%, 4=98.3%, 8=1.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 01:31:29.844 issued rwts: total=65255,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:31:29.844 latency : target=0, window=0, percentile=100.00%, depth=128 01:31:29.844 01:31:29.844 Run status group 0 (all jobs): 01:31:29.844 READ: bw=20.6MiB/s (21.6MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=510MiB (535MB), run=24653-24803msec 01:31:29.844 WRITE: bw=25.7MiB/s (27.0MB/s), 12.9MiB/s-13.3MiB/s (13.5MB/s-14.0MB/s), io=512MiB (537MB), run=19218-19899msec 01:31:32.374 ----------------------------------------------------- 01:31:32.374 Suppressions used: 01:31:32.374 count bytes template 01:31:32.374 2 10 /usr/src/fio/parse.c 01:31:32.374 2 192 /usr/src/fio/iolog.c 01:31:32.374 1 8 libtcmalloc_minimal.so 01:31:32.374 1 904 libcrypto.so 01:31:32.374 ----------------------------------------------------- 01:31:32.374 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:31:32.374 05:26:14 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:31:32.374 05:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 01:31:32.374 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 01:31:32.374 fio-3.35 01:31:32.374 Starting 1 thread 01:31:47.288 01:31:47.288 test: (groupid=0, jobs=1): err= 0: pid=77682: Mon Dec 9 05:26:29 2024 01:31:47.288 read: IOPS=7758, BW=30.3MiB/s (31.8MB/s)(255MiB/8404msec) 01:31:47.288 slat (nsec): min=3413, max=60273, avg=5076.90, stdev=1305.82 01:31:47.288 clat (usec): min=601, max=32447, avg=16490.11, stdev=1046.85 01:31:47.288 lat (usec): min=605, max=32452, avg=16495.19, stdev=1046.81 01:31:47.288 clat percentiles (usec): 01:31:47.288 | 1.00th=[15008], 5.00th=[15270], 10.00th=[15401], 20.00th=[15795], 01:31:47.288 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16581], 60.00th=[16712], 01:31:47.288 | 70.00th=[16909], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 01:31:47.288 | 99.00th=[18744], 99.50th=[22152], 99.90th=[27919], 99.95th=[28705], 01:31:47.288 | 99.99th=[31851] 01:31:47.288 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(256MiB/4715msec); 0 zone resets 01:31:47.288 slat (usec): min=4, max=1659, avg= 7.53, stdev= 9.83 01:31:47.288 clat (usec): min=563, max=49584, avg=9160.84, stdev=10819.67 01:31:47.288 lat (usec): min=569, max=49590, avg=9168.37, stdev=10819.67 01:31:47.288 clat percentiles (usec): 01:31:47.288 | 1.00th=[ 873], 5.00th=[ 1037], 10.00th=[ 1156], 20.00th=[ 1336], 01:31:47.288 | 30.00th=[ 1516], 40.00th=[ 1860], 50.00th=[ 5866], 60.00th=[ 6783], 01:31:47.288 | 70.00th=[ 8356], 80.00th=[12649], 90.00th=[32113], 95.00th=[33817], 01:31:47.288 | 99.00th=[35390], 99.50th=[35914], 99.90th=[38011], 99.95th=[40109], 01:31:47.288 | 99.99th=[45351] 01:31:47.288 bw ( KiB/s): min=20656, max=77488, per=94.26%, avg=52404.90, stdev=14724.16, samples=10 01:31:47.288 iops : min= 5164, max=19372, avg=13101.20, stdev=3681.01, samples=10 01:31:47.288 lat (usec) : 750=0.07%, 1000=1.83% 01:31:47.288 lat (msec) : 2=18.59%, 4=0.62%, 10=16.56%, 20=54.01%, 50=8.32% 01:31:47.288 cpu : usr=98.87%, sys=0.34%, ctx=28, majf=0, minf=5565 01:31:47.288 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:31:47.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:31:47.288 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 01:31:47.288 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:31:47.288 latency : target=0, window=0, percentile=100.00%, depth=128 01:31:47.288 01:31:47.288 Run status group 0 (all jobs): 01:31:47.288 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=255MiB (267MB), run=8404-8404msec 01:31:47.288 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=256MiB (268MB), run=4715-4715msec 01:31:49.199 ----------------------------------------------------- 01:31:49.199 Suppressions used: 01:31:49.199 count bytes template 01:31:49.199 1 5 /usr/src/fio/parse.c 01:31:49.199 2 192 /usr/src/fio/iolog.c 01:31:49.199 1 8 libtcmalloc_minimal.so 01:31:49.199 1 904 libcrypto.so 01:31:49.199 ----------------------------------------------------- 01:31:49.199 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 01:31:49.457 Remove shared memory files 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57729 /dev/shm/spdk_tgt_trace.pid75903 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 01:31:49.457 01:31:49.457 real 1m10.579s 01:31:49.457 user 2m30.983s 01:31:49.457 sys 0m4.558s 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:49.457 05:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:31:49.457 ************************************ 01:31:49.457 END TEST ftl_fio_basic 01:31:49.457 ************************************ 01:31:49.457 05:26:31 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 01:31:49.457 05:26:31 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:31:49.457 05:26:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:49.457 05:26:31 ftl -- common/autotest_common.sh@10 -- # set +x 01:31:49.457 ************************************ 01:31:49.457 START TEST ftl_bdevperf 01:31:49.457 ************************************ 01:31:49.457 05:26:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 01:31:49.717 * Looking for test storage... 
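[Annotation] The xtrace that follows steps through the lcov version gate in scripts/common.sh: `lt 1.15 2` calls cmp_versions, which splits both dotted versions on '.-:' and compares them field by field to decide whether the legacy --rc lcov_* options are needed. Below is a minimal standalone sketch of the same comparison with a hypothetical helper name (the real cmp_versions also accepts other operators such as '>' and '='):

ver_lt() {                         # usage: ver_lt 1.15 2  -> true when $1 < $2
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
    done
    return 1                                          # equal is not less-than
}
ver_lt 1.15 2 && echo 'lcov < 2: keep legacy --rc lcov_*_coverage options'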
01:31:49.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:31:49.717 05:26:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:49.717 05:26:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 01:31:49.717 05:26:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:49.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:49.717 --rc genhtml_branch_coverage=1 01:31:49.717 --rc genhtml_function_coverage=1 01:31:49.717 --rc genhtml_legend=1 01:31:49.717 --rc geninfo_all_blocks=1 01:31:49.717 --rc geninfo_unexecuted_blocks=1 01:31:49.717 01:31:49.717 ' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:49.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:49.717 --rc genhtml_branch_coverage=1 01:31:49.717 
--rc genhtml_function_coverage=1 01:31:49.717 --rc genhtml_legend=1 01:31:49.717 --rc geninfo_all_blocks=1 01:31:49.717 --rc geninfo_unexecuted_blocks=1 01:31:49.717 01:31:49.717 ' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:49.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:49.717 --rc genhtml_branch_coverage=1 01:31:49.717 --rc genhtml_function_coverage=1 01:31:49.717 --rc genhtml_legend=1 01:31:49.717 --rc geninfo_all_blocks=1 01:31:49.717 --rc geninfo_unexecuted_blocks=1 01:31:49.717 01:31:49.717 ' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:49.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:49.717 --rc genhtml_branch_coverage=1 01:31:49.717 --rc genhtml_function_coverage=1 01:31:49.717 --rc genhtml_legend=1 01:31:49.717 --rc geninfo_all_blocks=1 01:31:49.717 --rc geninfo_unexecuted_blocks=1 01:31:49.717 01:31:49.717 ' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77923 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77923 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77923 ']' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:49.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:49.717 05:26:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:31:49.976 [2024-12-09 05:26:32.193764] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
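[Annotation] The create_base_bdev flow below attaches the controller and then sizes nvme0n1 via get_bdev_size, which pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and converts bytes to MiB (4096 x 1310720 bytes = 5120 MiB in this run). A condensed, hypothetical version of that helper, assuming rpc.py from spdk/scripts is on PATH (the real get_bdev_size lives in autotest_common.sh and adds tracing):

get_bdev_size_mb() {               # usage: get_bdev_size_mb nvme0n1
    local info bs nb
    info=$(rpc.py bdev_get_bdevs -b "$1") || return 1
    bs=$(jq '.[] .block_size' <<< "$info")   # e.g. 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")   # e.g. 1310720
    echo $(( bs * nb / 1024 / 1024 ))        # bytes -> MiB, e.g. 5120
}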
01:31:49.976 [2024-12-09 05:26:32.193904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77923 ] 01:31:49.976 [2024-12-09 05:26:32.381312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:50.235 [2024-12-09 05:26:32.509326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 01:31:50.803 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:31:51.060 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:31:51.319 { 01:31:51.319 "name": "nvme0n1", 01:31:51.319 "aliases": [ 01:31:51.319 "edd4fce7-32fc-479a-888d-9c648923bcc3" 01:31:51.319 ], 01:31:51.319 "product_name": "NVMe disk", 01:31:51.319 "block_size": 4096, 01:31:51.319 "num_blocks": 1310720, 01:31:51.319 "uuid": "edd4fce7-32fc-479a-888d-9c648923bcc3", 01:31:51.319 "numa_id": -1, 01:31:51.319 "assigned_rate_limits": { 01:31:51.319 "rw_ios_per_sec": 0, 01:31:51.319 "rw_mbytes_per_sec": 0, 01:31:51.319 "r_mbytes_per_sec": 0, 01:31:51.319 "w_mbytes_per_sec": 0 01:31:51.319 }, 01:31:51.319 "claimed": true, 01:31:51.319 "claim_type": "read_many_write_one", 01:31:51.319 "zoned": false, 01:31:51.319 "supported_io_types": { 01:31:51.319 "read": true, 01:31:51.319 "write": true, 01:31:51.319 "unmap": true, 01:31:51.319 "flush": true, 01:31:51.319 "reset": true, 01:31:51.319 "nvme_admin": true, 01:31:51.319 "nvme_io": true, 01:31:51.319 "nvme_io_md": false, 01:31:51.319 "write_zeroes": true, 01:31:51.319 "zcopy": false, 01:31:51.319 "get_zone_info": false, 01:31:51.319 "zone_management": false, 01:31:51.319 "zone_append": false, 01:31:51.319 "compare": true, 01:31:51.319 "compare_and_write": false, 01:31:51.319 "abort": true, 01:31:51.319 "seek_hole": false, 01:31:51.319 "seek_data": false, 01:31:51.319 "copy": true, 01:31:51.319 "nvme_iov_md": false 01:31:51.319 }, 01:31:51.319 "driver_specific": { 01:31:51.319 
"nvme": [ 01:31:51.319 { 01:31:51.319 "pci_address": "0000:00:11.0", 01:31:51.319 "trid": { 01:31:51.319 "trtype": "PCIe", 01:31:51.319 "traddr": "0000:00:11.0" 01:31:51.319 }, 01:31:51.319 "ctrlr_data": { 01:31:51.319 "cntlid": 0, 01:31:51.319 "vendor_id": "0x1b36", 01:31:51.319 "model_number": "QEMU NVMe Ctrl", 01:31:51.319 "serial_number": "12341", 01:31:51.319 "firmware_revision": "8.0.0", 01:31:51.319 "subnqn": "nqn.2019-08.org.qemu:12341", 01:31:51.319 "oacs": { 01:31:51.319 "security": 0, 01:31:51.319 "format": 1, 01:31:51.319 "firmware": 0, 01:31:51.319 "ns_manage": 1 01:31:51.319 }, 01:31:51.319 "multi_ctrlr": false, 01:31:51.319 "ana_reporting": false 01:31:51.319 }, 01:31:51.319 "vs": { 01:31:51.319 "nvme_version": "1.4" 01:31:51.319 }, 01:31:51.319 "ns_data": { 01:31:51.319 "id": 1, 01:31:51.319 "can_share": false 01:31:51.319 } 01:31:51.319 } 01:31:51.319 ], 01:31:51.319 "mp_policy": "active_passive" 01:31:51.319 } 01:31:51.319 } 01:31:51.319 ]' 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:31:51.319 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:31:51.579 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12 01:31:51.579 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 01:31:51.579 05:26:33 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0275fb6c-3c7d-4495-ac1f-1c75ccc3ca12 01:31:51.838 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:31:51.838 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=4f29216b-d743-4a16-a57a-0d62f261b4aa 01:31:51.838 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4f29216b-d743-4a16-a57a-0d62f261b4aa 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.097 05:26:34 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:31:52.097 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:31:52.357 { 01:31:52.357 "name": "fd309130-0ec7-48bf-ad8b-51a464a7edca", 01:31:52.357 "aliases": [ 01:31:52.357 "lvs/nvme0n1p0" 01:31:52.357 ], 01:31:52.357 "product_name": "Logical Volume", 01:31:52.357 "block_size": 4096, 01:31:52.357 "num_blocks": 26476544, 01:31:52.357 "uuid": "fd309130-0ec7-48bf-ad8b-51a464a7edca", 01:31:52.357 "assigned_rate_limits": { 01:31:52.357 "rw_ios_per_sec": 0, 01:31:52.357 "rw_mbytes_per_sec": 0, 01:31:52.357 "r_mbytes_per_sec": 0, 01:31:52.357 "w_mbytes_per_sec": 0 01:31:52.357 }, 01:31:52.357 "claimed": false, 01:31:52.357 "zoned": false, 01:31:52.357 "supported_io_types": { 01:31:52.357 "read": true, 01:31:52.357 "write": true, 01:31:52.357 "unmap": true, 01:31:52.357 "flush": false, 01:31:52.357 "reset": true, 01:31:52.357 "nvme_admin": false, 01:31:52.357 "nvme_io": false, 01:31:52.357 "nvme_io_md": false, 01:31:52.357 "write_zeroes": true, 01:31:52.357 "zcopy": false, 01:31:52.357 "get_zone_info": false, 01:31:52.357 "zone_management": false, 01:31:52.357 "zone_append": false, 01:31:52.357 "compare": false, 01:31:52.357 "compare_and_write": false, 01:31:52.357 "abort": false, 01:31:52.357 "seek_hole": true, 01:31:52.357 "seek_data": true, 01:31:52.357 "copy": false, 01:31:52.357 "nvme_iov_md": false 01:31:52.357 }, 01:31:52.357 "driver_specific": { 01:31:52.357 "lvol": { 01:31:52.357 "lvol_store_uuid": "4f29216b-d743-4a16-a57a-0d62f261b4aa", 01:31:52.357 "base_bdev": "nvme0n1", 01:31:52.357 "thin_provision": true, 01:31:52.357 "num_allocated_clusters": 0, 01:31:52.357 "snapshot": false, 01:31:52.357 "clone": false, 01:31:52.357 "esnap_clone": false 01:31:52.357 } 01:31:52.357 } 01:31:52.357 } 01:31:52.357 ]' 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 01:31:52.357 05:26:34 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:31:52.616 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:52.874 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:31:52.874 { 01:31:52.874 "name": "fd309130-0ec7-48bf-ad8b-51a464a7edca", 01:31:52.874 "aliases": [ 01:31:52.874 "lvs/nvme0n1p0" 01:31:52.874 ], 01:31:52.874 "product_name": "Logical Volume", 01:31:52.874 "block_size": 4096, 01:31:52.874 "num_blocks": 26476544, 01:31:52.874 "uuid": "fd309130-0ec7-48bf-ad8b-51a464a7edca", 01:31:52.874 "assigned_rate_limits": { 01:31:52.874 "rw_ios_per_sec": 0, 01:31:52.874 "rw_mbytes_per_sec": 0, 01:31:52.874 "r_mbytes_per_sec": 0, 01:31:52.874 "w_mbytes_per_sec": 0 01:31:52.874 }, 01:31:52.874 "claimed": false, 01:31:52.874 "zoned": false, 01:31:52.874 "supported_io_types": { 01:31:52.874 "read": true, 01:31:52.874 "write": true, 01:31:52.874 "unmap": true, 01:31:52.874 "flush": false, 01:31:52.874 "reset": true, 01:31:52.874 "nvme_admin": false, 01:31:52.874 "nvme_io": false, 01:31:52.874 "nvme_io_md": false, 01:31:52.874 "write_zeroes": true, 01:31:52.874 "zcopy": false, 01:31:52.874 "get_zone_info": false, 01:31:52.874 "zone_management": false, 01:31:52.874 "zone_append": false, 01:31:52.874 "compare": false, 01:31:52.874 "compare_and_write": false, 01:31:52.874 "abort": false, 01:31:52.874 "seek_hole": true, 01:31:52.874 "seek_data": true, 01:31:52.874 "copy": false, 01:31:52.874 "nvme_iov_md": false 01:31:52.874 }, 01:31:52.874 "driver_specific": { 01:31:52.874 "lvol": { 01:31:52.874 "lvol_store_uuid": "4f29216b-d743-4a16-a57a-0d62f261b4aa", 01:31:52.874 "base_bdev": "nvme0n1", 01:31:52.874 "thin_provision": true, 01:31:52.874 "num_allocated_clusters": 0, 01:31:52.874 "snapshot": false, 01:31:52.874 "clone": false, 01:31:52.874 "esnap_clone": false 01:31:52.874 } 01:31:52.874 } 01:31:52.874 } 01:31:52.874 ]' 01:31:52.874 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:31:52.874 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:31:52.874 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:31:53.130 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd309130-0ec7-48bf-ad8b-51a464a7edca 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:31:53.388 { 01:31:53.388 "name": "fd309130-0ec7-48bf-ad8b-51a464a7edca", 01:31:53.388 "aliases": [ 01:31:53.388 "lvs/nvme0n1p0" 01:31:53.388 ], 01:31:53.388 "product_name": "Logical Volume", 01:31:53.388 "block_size": 4096, 01:31:53.388 "num_blocks": 26476544, 01:31:53.388 "uuid": "fd309130-0ec7-48bf-ad8b-51a464a7edca", 01:31:53.388 "assigned_rate_limits": { 01:31:53.388 "rw_ios_per_sec": 0, 01:31:53.388 "rw_mbytes_per_sec": 0, 01:31:53.388 "r_mbytes_per_sec": 0, 01:31:53.388 "w_mbytes_per_sec": 0 01:31:53.388 }, 01:31:53.388 "claimed": false, 01:31:53.388 "zoned": false, 01:31:53.388 "supported_io_types": { 01:31:53.388 "read": true, 01:31:53.388 "write": true, 01:31:53.388 "unmap": true, 01:31:53.388 "flush": false, 01:31:53.388 "reset": true, 01:31:53.388 "nvme_admin": false, 01:31:53.388 "nvme_io": false, 01:31:53.388 "nvme_io_md": false, 01:31:53.388 "write_zeroes": true, 01:31:53.388 "zcopy": false, 01:31:53.388 "get_zone_info": false, 01:31:53.388 "zone_management": false, 01:31:53.388 "zone_append": false, 01:31:53.388 "compare": false, 01:31:53.388 "compare_and_write": false, 01:31:53.388 "abort": false, 01:31:53.388 "seek_hole": true, 01:31:53.388 "seek_data": true, 01:31:53.388 "copy": false, 01:31:53.388 "nvme_iov_md": false 01:31:53.388 }, 01:31:53.388 "driver_specific": { 01:31:53.388 "lvol": { 01:31:53.388 "lvol_store_uuid": "4f29216b-d743-4a16-a57a-0d62f261b4aa", 01:31:53.388 "base_bdev": "nvme0n1", 01:31:53.388 "thin_provision": true, 01:31:53.388 "num_allocated_clusters": 0, 01:31:53.388 "snapshot": false, 01:31:53.388 "clone": false, 01:31:53.388 "esnap_clone": false 01:31:53.388 } 01:31:53.388 } 01:31:53.388 } 01:31:53.388 ]' 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 01:31:53.388 05:26:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fd309130-0ec7-48bf-ad8b-51a464a7edca -c nvc0n1p0 --l2p_dram_limit 20 01:31:53.647 [2024-12-09 05:26:36.012393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.647 [2024-12-09 05:26:36.012452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:31:53.647 [2024-12-09 05:26:36.012481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:31:53.647 [2024-12-09 05:26:36.012511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.647 [2024-12-09 05:26:36.012594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.647 [2024-12-09 05:26:36.012610] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:31:53.647 [2024-12-09 05:26:36.012621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 01:31:53.647 [2024-12-09 05:26:36.012635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.647 [2024-12-09 05:26:36.012656] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:31:53.647 [2024-12-09 05:26:36.013741] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:31:53.647 [2024-12-09 05:26:36.013770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.647 [2024-12-09 05:26:36.013784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:31:53.647 [2024-12-09 05:26:36.013796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 01:31:53.647 [2024-12-09 05:26:36.013811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.647 [2024-12-09 05:26:36.013895] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 76f26940-5711-4665-a6ab-7bfc0f25ae7e 01:31:53.647 [2024-12-09 05:26:36.016371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.016416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:31:53.648 [2024-12-09 05:26:36.016439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 01:31:53.648 [2024-12-09 05:26:36.016449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.030415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.030447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:31:53.648 [2024-12-09 05:26:36.030489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.915 ms 01:31:53.648 [2024-12-09 05:26:36.030505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.030615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.030629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:31:53.648 [2024-12-09 05:26:36.030649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 01:31:53.648 [2024-12-09 05:26:36.030660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.030721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.030734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:31:53.648 [2024-12-09 05:26:36.030748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:31:53.648 [2024-12-09 05:26:36.030758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.030792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:31:53.648 [2024-12-09 05:26:36.036570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.036751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:31:53.648 [2024-12-09 05:26:36.036774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.804 ms 01:31:53.648 [2024-12-09 05:26:36.036797] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.036836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.036851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:31:53.648 [2024-12-09 05:26:36.036863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:31:53.648 [2024-12-09 05:26:36.036877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.036911] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:31:53.648 [2024-12-09 05:26:36.037056] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:31:53.648 [2024-12-09 05:26:36.037072] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:31:53.648 [2024-12-09 05:26:36.037090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:31:53.648 [2024-12-09 05:26:36.037104] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037120] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037132] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:31:53.648 [2024-12-09 05:26:36.037146] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:31:53.648 [2024-12-09 05:26:36.037156] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:31:53.648 [2024-12-09 05:26:36.037170] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:31:53.648 [2024-12-09 05:26:36.037185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.037199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:31:53.648 [2024-12-09 05:26:36.037210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 01:31:53.648 [2024-12-09 05:26:36.037223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.037295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.037311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:31:53.648 [2024-12-09 05:26:36.037322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 01:31:53.648 [2024-12-09 05:26:36.037339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.037420] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:31:53.648 [2024-12-09 05:26:36.037440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:31:53.648 [2024-12-09 05:26:36.037452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:31:53.648 [2024-12-09 05:26:36.037507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:31:53.648 
[2024-12-09 05:26:36.037529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:31:53.648 [2024-12-09 05:26:36.037538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:31:53.648 [2024-12-09 05:26:36.037560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:31:53.648 [2024-12-09 05:26:36.037587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:31:53.648 [2024-12-09 05:26:36.037597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:31:53.648 [2024-12-09 05:26:36.037610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:31:53.648 [2024-12-09 05:26:36.037623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:31:53.648 [2024-12-09 05:26:36.037639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:31:53.648 [2024-12-09 05:26:36.037662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:31:53.648 [2024-12-09 05:26:36.037697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:31:53.648 [2024-12-09 05:26:36.037733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:31:53.648 [2024-12-09 05:26:36.037765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:31:53.648 [2024-12-09 05:26:36.037799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:31:53.648 [2024-12-09 05:26:36.037833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:31:53.648 [2024-12-09 05:26:36.037855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:31:53.648 [2024-12-09 05:26:36.037868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:31:53.648 [2024-12-09 05:26:36.037878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:31:53.648 [2024-12-09 05:26:36.037890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:31:53.648 [2024-12-09 05:26:36.037900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 01:31:53.648 [2024-12-09 05:26:36.037912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:31:53.648 [2024-12-09 05:26:36.037933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:31:53.648 [2024-12-09 05:26:36.037942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.037955] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:31:53.648 [2024-12-09 05:26:36.037966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:31:53.648 [2024-12-09 05:26:36.037979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:31:53.648 [2024-12-09 05:26:36.037990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:31:53.648 [2024-12-09 05:26:36.038009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:31:53.648 [2024-12-09 05:26:36.038020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:31:53.648 [2024-12-09 05:26:36.038033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:31:53.648 [2024-12-09 05:26:36.038042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:31:53.648 [2024-12-09 05:26:36.038055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:31:53.648 [2024-12-09 05:26:36.038065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:31:53.648 [2024-12-09 05:26:36.038084] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:31:53.648 [2024-12-09 05:26:36.038096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:31:53.648 [2024-12-09 05:26:36.038123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:31:53.648 [2024-12-09 05:26:36.038137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:31:53.648 [2024-12-09 05:26:36.038148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:31:53.648 [2024-12-09 05:26:36.038162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:31:53.648 [2024-12-09 05:26:36.038172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:31:53.648 [2024-12-09 05:26:36.038186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:31:53.648 [2024-12-09 05:26:36.038196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:31:53.648 [2024-12-09 05:26:36.038213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:31:53.648 [2024-12-09 05:26:36.038224] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:31:53.648 [2024-12-09 05:26:36.038284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:31:53.648 [2024-12-09 05:26:36.038296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:31:53.648 [2024-12-09 05:26:36.038326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:31:53.648 [2024-12-09 05:26:36.038339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:31:53.648 [2024-12-09 05:26:36.038349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:31:53.648 [2024-12-09 05:26:36.038364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:53.648 [2024-12-09 05:26:36.038374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:31:53.648 [2024-12-09 05:26:36.038389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 01:31:53.648 [2024-12-09 05:26:36.038401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:53.648 [2024-12-09 05:26:36.038446] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
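The NOTICE lines above are FTL's startup layout dump: each region (superblock, band metadata, P2L checkpoints, trim metadata, valid map, user data) is reported with its offset and size, once for the NV cache device and once for the base device, as part of the 'FTL startup' management process recorded here. A dump like this is emitted when an FTL bdev is brought up; a minimal sketch of such a creation call, where the -d (base bdev) and -c (cache bdev) names are placeholders for this environment, would be:

    # Create an FTL bdev; bringing it up runs the 'FTL startup' management
    # process whose layout dump and trace steps appear in this log.
    # nvme0n1 and nvc0n1p0 below are hypothetical bdev names.
    $ scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0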
01:31:53.648 [2024-12-09 05:26:36.038471] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:31:57.840 [2024-12-09 05:26:40.081569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.081674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:31:57.840 [2024-12-09 05:26:40.081714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4049.679 ms 01:31:57.840 [2024-12-09 05:26:40.081726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.126867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.127160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:31:57.840 [2024-12-09 05:26:40.127198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.887 ms 01:31:57.840 [2024-12-09 05:26:40.127211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.127391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.127405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:31:57.840 [2024-12-09 05:26:40.127424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 01:31:57.840 [2024-12-09 05:26:40.127436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.204352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.204402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:31:57.840 [2024-12-09 05:26:40.204438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.980 ms 01:31:57.840 [2024-12-09 05:26:40.204450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.204515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.204528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:31:57.840 [2024-12-09 05:26:40.204543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:31:57.840 [2024-12-09 05:26:40.204558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.205434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.205455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:31:57.840 [2024-12-09 05:26:40.205648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 01:31:57.840 [2024-12-09 05:26:40.205670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.205801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.205814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:31:57.840 [2024-12-09 05:26:40.205832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 01:31:57.840 [2024-12-09 05:26:40.205843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.228102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.228138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:31:57.840 [2024-12-09 
05:26:40.228155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.269 ms 01:31:57.840 [2024-12-09 05:26:40.228196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:57.840 [2024-12-09 05:26:40.242209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 01:31:57.840 [2024-12-09 05:26:40.251748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:57.840 [2024-12-09 05:26:40.251917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:31:57.840 [2024-12-09 05:26:40.252009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.510 ms 01:31:57.840 [2024-12-09 05:26:40.252052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.099 [2024-12-09 05:26:40.351414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.099 [2024-12-09 05:26:40.351495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:31:58.099 [2024-12-09 05:26:40.351513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.488 ms 01:31:58.099 [2024-12-09 05:26:40.351527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.099 [2024-12-09 05:26:40.351761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.099 [2024-12-09 05:26:40.351783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:31:58.099 [2024-12-09 05:26:40.351796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 01:31:58.099 [2024-12-09 05:26:40.351814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.099 [2024-12-09 05:26:40.387709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.099 [2024-12-09 05:26:40.387756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:31:58.099 [2024-12-09 05:26:40.387771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.902 ms 01:31:58.099 [2024-12-09 05:26:40.387801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.099 [2024-12-09 05:26:40.423597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.099 [2024-12-09 05:26:40.423643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:31:58.099 [2024-12-09 05:26:40.423659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.813 ms 01:31:58.099 [2024-12-09 05:26:40.423673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.099 [2024-12-09 05:26:40.424429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.099 [2024-12-09 05:26:40.424452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:31:58.099 [2024-12-09 05:26:40.424480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 01:31:58.099 [2024-12-09 05:26:40.424495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.099 [2024-12-09 05:26:40.526498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.099 [2024-12-09 05:26:40.526560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:31:58.099 [2024-12-09 05:26:40.526576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.115 ms 01:31:58.099 [2024-12-09 05:26:40.526591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.357 [2024-12-09 
05:26:40.564309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.357 [2024-12-09 05:26:40.564357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:31:58.357 [2024-12-09 05:26:40.564375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.699 ms 01:31:58.357 [2024-12-09 05:26:40.564389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.358 [2024-12-09 05:26:40.599528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.358 [2024-12-09 05:26:40.599682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:31:58.358 [2024-12-09 05:26:40.599718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.156 ms 01:31:58.358 [2024-12-09 05:26:40.599732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.358 [2024-12-09 05:26:40.633853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.358 [2024-12-09 05:26:40.633899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:31:58.358 [2024-12-09 05:26:40.633913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.139 ms 01:31:58.358 [2024-12-09 05:26:40.633926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.358 [2024-12-09 05:26:40.633967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.358 [2024-12-09 05:26:40.633986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:31:58.358 [2024-12-09 05:26:40.633997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:31:58.358 [2024-12-09 05:26:40.634010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.358 [2024-12-09 05:26:40.634121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:31:58.358 [2024-12-09 05:26:40.634136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:31:58.358 [2024-12-09 05:26:40.634147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:31:58.358 [2024-12-09 05:26:40.634160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:31:58.358 [2024-12-09 05:26:40.635614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4630.194 ms, result 0 01:31:58.358 { 01:31:58.358 "name": "ftl0", 01:31:58.358 "uuid": "76f26940-5711-4665-a6ab-7bfc0f25ae7e" 01:31:58.358 } 01:31:58.358 05:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 01:31:58.358 05:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 01:31:58.358 05:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 01:31:58.616 05:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 01:31:58.616 [2024-12-09 05:26:40.975279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 01:31:58.616 I/O size of 69632 is greater than zero copy threshold (65536). 01:31:58.616 Zero copy mechanism will not be used. 01:31:58.616 Running I/O for 4 seconds... 
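The first workload drives queue depth 1 with 69632-byte random writes: 69632 = 17 x 4096, i.e. 68 KiB per I/O, which is above bdevperf's 65536-byte (64 KiB) zero-copy threshold, hence the notice above that zero copy will not be used. The existence check at the @28 step is a common guard before driving I/O; restated as a standalone sketch using the same paths as this log:

    # Confirm the FTL bdev is registered before running the workload,
    # mirroring the bdevperf.sh@28 pipeline above.
    $ scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0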
01:32:00.934 1333.00 IOPS, 88.52 MiB/s [2024-12-09T05:26:44.328Z] 1352.00 IOPS, 89.78 MiB/s [2024-12-09T05:26:45.272Z] 1375.67 IOPS, 91.35 MiB/s [2024-12-09T05:26:45.272Z] 1407.25 IOPS, 93.45 MiB/s 01:32:02.816 Latency(us) 01:32:02.816 [2024-12-09T05:26:45.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:02.816 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 01:32:02.816 ftl0 : 4.00 1406.84 93.42 0.00 0.00 744.12 206.44 2092.41 01:32:02.816 [2024-12-09T05:26:45.272Z] =================================================================================================================== 01:32:02.816 [2024-12-09T05:26:45.272Z] Total : 1406.84 93.42 0.00 0.00 744.12 206.44 2092.41 01:32:02.816 [2024-12-09 05:26:44.981184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 01:32:02.816 { 01:32:02.816 "results": [ 01:32:02.816 { 01:32:02.816 "job": "ftl0", 01:32:02.816 "core_mask": "0x1", 01:32:02.816 "workload": "randwrite", 01:32:02.816 "status": "finished", 01:32:02.816 "queue_depth": 1, 01:32:02.816 "io_size": 69632, 01:32:02.816 "runtime": 4.001877, 01:32:02.816 "iops": 1406.83984040489, 01:32:02.816 "mibps": 93.42295815188723, 01:32:02.816 "io_failed": 0, 01:32:02.816 "io_timeout": 0, 01:32:02.816 "avg_latency_us": 744.1175504148031, 01:32:02.816 "min_latency_us": 206.4449799196787, 01:32:02.816 "max_latency_us": 2092.4144578313253 01:32:02.816 } 01:32:02.816 ], 01:32:02.816 "core_count": 1 01:32:02.816 } 01:32:02.816 05:26:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 01:32:02.816 [2024-12-09 05:26:45.122759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 01:32:02.816 Running I/O for 4 seconds... 
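The second run switches to queue depth 128 with 4 KiB random writes. Each run prints both a human-readable latency table and a JSON results object, so the numbers below can be post-processed directly. A small sketch, assuming the JSON printed in this log has been saved to results.json (a hypothetical filename):

    # Pull throughput and mean latency out of the perform_tests JSON;
    # the field names match the "results" objects printed in this log.
    $ jq -r '.results[0] | "\(.iops) IOPS, avg \(.avg_latency_us) us"' results.json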
01:32:04.683 11499.00 IOPS, 44.92 MiB/s [2024-12-09T05:26:48.516Z] 11201.50 IOPS, 43.76 MiB/s [2024-12-09T05:26:49.453Z] 10890.00 IOPS, 42.54 MiB/s [2024-12-09T05:26:49.453Z] 10896.50 IOPS, 42.56 MiB/s 01:32:06.997 Latency(us) 01:32:06.997 [2024-12-09T05:26:49.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:06.997 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 01:32:06.997 ftl0 : 4.02 10872.69 42.47 0.00 0.00 11741.67 228.65 37900.34 01:32:06.997 [2024-12-09T05:26:49.453Z] =================================================================================================================== 01:32:06.997 [2024-12-09T05:26:49.453Z] Total : 10872.69 42.47 0.00 0.00 11741.67 0.00 37900.34 01:32:06.997 { 01:32:06.997 "results": [ 01:32:06.997 { 01:32:06.997 "job": "ftl0", 01:32:06.997 "core_mask": "0x1", 01:32:06.997 "workload": "randwrite", 01:32:06.997 "status": "finished", 01:32:06.997 "queue_depth": 128, 01:32:06.997 "io_size": 4096, 01:32:06.997 "runtime": 4.020257, 01:32:06.997 "iops": 10872.687989847414, 01:32:06.997 "mibps": 42.47143746034146, 01:32:06.997 "io_failed": 0, 01:32:06.997 "io_timeout": 0, 01:32:06.997 "avg_latency_us": 11741.67136975529, 01:32:06.997 "min_latency_us": 228.65220883534136, 01:32:06.997 "max_latency_us": 37900.33734939759 01:32:06.997 } 01:32:06.997 ], 01:32:06.997 "core_count": 1 01:32:06.997 } 01:32:06.997 [2024-12-09 05:26:49.148053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 01:32:06.997 05:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 01:32:06.997 [2024-12-09 05:26:49.274029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 01:32:06.997 Running I/O for 4 seconds... 
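The final run uses the verify workload, which writes a pattern and reads it back for comparison over the LBA range reported below. After it completes, the @34 step tears the bdev down with bdev_ftl_delete, which kicks off the 'FTL shutdown' management process traced afterwards. A minimal sketch of that teardown:

    # Delete the FTL bdev; this runs the 'FTL shutdown' sequence
    # (persist L2P, persist metadata, set FTL clean state) traced below.
    $ scripts/rpc.py bdev_ftl_delete -b ftl0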
01:32:08.903 8727.00 IOPS, 34.09 MiB/s [2024-12-09T05:26:52.295Z] 8596.50 IOPS, 33.58 MiB/s [2024-12-09T05:26:53.677Z] 8745.00 IOPS, 34.16 MiB/s [2024-12-09T05:26:53.677Z] 8778.75 IOPS, 34.29 MiB/s 01:32:11.221 Latency(us) 01:32:11.221 [2024-12-09T05:26:53.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:11.221 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:32:11.221 Verification LBA range: start 0x0 length 0x1400000 01:32:11.221 ftl0 : 4.01 8789.29 34.33 0.00 0.00 14519.08 259.91 27793.58 01:32:11.221 [2024-12-09T05:26:53.677Z] =================================================================================================================== 01:32:11.221 [2024-12-09T05:26:53.677Z] Total : 8789.29 34.33 0.00 0.00 14519.08 0.00 27793.58 01:32:11.221 [2024-12-09 05:26:53.297274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 01:32:11.221 { 01:32:11.221 "results": [ 01:32:11.221 { 01:32:11.221 "job": "ftl0", 01:32:11.221 "core_mask": "0x1", 01:32:11.221 "workload": "verify", 01:32:11.221 "status": "finished", 01:32:11.221 "verify_range": { 01:32:11.221 "start": 0, 01:32:11.221 "length": 20971520 01:32:11.221 }, 01:32:11.221 "queue_depth": 128, 01:32:11.221 "io_size": 4096, 01:32:11.221 "runtime": 4.009651, 01:32:11.221 "iops": 8789.293631789898, 01:32:11.221 "mibps": 34.33317824917929, 01:32:11.221 "io_failed": 0, 01:32:11.221 "io_timeout": 0, 01:32:11.221 "avg_latency_us": 14519.07673834775, 01:32:11.221 "min_latency_us": 259.906827309237, 01:32:11.221 "max_latency_us": 27793.580722891566 01:32:11.221 } 01:32:11.221 ], 01:32:11.221 "core_count": 1 01:32:11.221 } 01:32:11.221 05:26:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 01:32:11.221 [2024-12-09 05:26:53.513766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.221 [2024-12-09 05:26:53.513832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:32:11.221 [2024-12-09 05:26:53.513849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:32:11.221 [2024-12-09 05:26:53.513880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.221 [2024-12-09 05:26:53.513909] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:32:11.221 [2024-12-09 05:26:53.518598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.221 [2024-12-09 05:26:53.518635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:32:11.221 [2024-12-09 05:26:53.518652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 01:32:11.221 [2024-12-09 05:26:53.518679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.221 [2024-12-09 05:26:53.520811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.221 [2024-12-09 05:26:53.520856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:32:11.221 [2024-12-09 05:26:53.520881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 01:32:11.221 [2024-12-09 05:26:53.520893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.480 [2024-12-09 05:26:53.736850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.736904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 01:32:11.481 [2024-12-09 05:26:53.736931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.280 ms 01:32:11.481 [2024-12-09 05:26:53.736942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.741817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.741850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:32:11.481 [2024-12-09 05:26:53.741866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms 01:32:11.481 [2024-12-09 05:26:53.741881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.777488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.777527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:32:11.481 [2024-12-09 05:26:53.777544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.595 ms 01:32:11.481 [2024-12-09 05:26:53.777554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.799253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.799298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:32:11.481 [2024-12-09 05:26:53.799315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.689 ms 01:32:11.481 [2024-12-09 05:26:53.799343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.799523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.799538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:32:11.481 [2024-12-09 05:26:53.799557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 01:32:11.481 [2024-12-09 05:26:53.799567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.833507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.833544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:32:11.481 [2024-12-09 05:26:53.833560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.974 ms 01:32:11.481 [2024-12-09 05:26:53.833570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.867123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.867161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:32:11.481 [2024-12-09 05:26:53.867177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.565 ms 01:32:11.481 [2024-12-09 05:26:53.867202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.899951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.899990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:32:11.481 [2024-12-09 05:26:53.900007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.755 ms 01:32:11.481 [2024-12-09 05:26:53.900017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.933830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.481 [2024-12-09 05:26:53.933867] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:32:11.481 [2024-12-09 05:26:53.933887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.773 ms 01:32:11.481 [2024-12-09 05:26:53.933896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.481 [2024-12-09 05:26:53.933936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:32:11.481 [2024-12-09 05:26:53.933953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.933968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.933980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.933994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 01:32:11.481 [2024-12-09 05:26:53.934220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:32:11.481 [2024-12-09 05:26:53.934701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.934990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935204] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:32:11.482 [2024-12-09 05:26:53.935264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:32:11.482 [2024-12-09 05:26:53.935278] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76f26940-5711-4665-a6ab-7bfc0f25ae7e 01:32:11.482 [2024-12-09 05:26:53.935294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:32:11.482 [2024-12-09 05:26:53.935307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:32:11.482 [2024-12-09 05:26:53.935317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:32:11.482 [2024-12-09 05:26:53.935331] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:32:11.482 [2024-12-09 05:26:53.935341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:32:11.482 [2024-12-09 05:26:53.935355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:32:11.482 [2024-12-09 05:26:53.935365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:32:11.482 [2024-12-09 05:26:53.935381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:32:11.482 [2024-12-09 05:26:53.935390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:32:11.482 [2024-12-09 05:26:53.935403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.482 [2024-12-09 05:26:53.935414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:32:11.482 [2024-12-09 05:26:53.935428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 01:32:11.482 [2024-12-09 05:26:53.935438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:53.955376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.742 [2024-12-09 05:26:53.955411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:32:11.742 [2024-12-09 05:26:53.955427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.908 ms 01:32:11.742 [2024-12-09 05:26:53.955453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:53.956030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:11.742 [2024-12-09 05:26:53.956053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:32:11.742 [2024-12-09 05:26:53.956067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 01:32:11.742 [2024-12-09 05:26:53.956087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:54.011206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:11.742 [2024-12-09 05:26:54.011255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:11.742 [2024-12-09 05:26:54.011293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:11.742 [2024-12-09 05:26:54.011304] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:54.011370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:11.742 [2024-12-09 05:26:54.011382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:11.742 [2024-12-09 05:26:54.011396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:11.742 [2024-12-09 05:26:54.011405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:54.011513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:11.742 [2024-12-09 05:26:54.011527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:11.742 [2024-12-09 05:26:54.011542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:11.742 [2024-12-09 05:26:54.011553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:54.011575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:11.742 [2024-12-09 05:26:54.011586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:11.742 [2024-12-09 05:26:54.011600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:11.742 [2024-12-09 05:26:54.011610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:11.742 [2024-12-09 05:26:54.139028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:11.742 [2024-12-09 05:26:54.139092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:11.742 [2024-12-09 05:26:54.139131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:11.742 [2024-12-09 05:26:54.139143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.241255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.241317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:12.001 [2024-12-09 05:26:54.241335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:12.001 [2024-12-09 05:26:54.241346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.241519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.241534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:12.001 [2024-12-09 05:26:54.241548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:12.001 [2024-12-09 05:26:54.241559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.241629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.241641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:12.001 [2024-12-09 05:26:54.241655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:12.001 [2024-12-09 05:26:54.241665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.241816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.241834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:12.001 [2024-12-09 05:26:54.241852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 01:32:12.001 [2024-12-09 05:26:54.241862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.241905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.241933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:32:12.001 [2024-12-09 05:26:54.241947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:12.001 [2024-12-09 05:26:54.241958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.242010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.242025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:12.001 [2024-12-09 05:26:54.242040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:12.001 [2024-12-09 05:26:54.242063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.242121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:12.001 [2024-12-09 05:26:54.242134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:12.001 [2024-12-09 05:26:54.242148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:12.001 [2024-12-09 05:26:54.242158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:12.001 [2024-12-09 05:26:54.242320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 729.680 ms, result 0 01:32:12.001 true 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77923 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77923 ']' 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77923 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77923 01:32:12.001 killing process with pid 77923 01:32:12.001 Received shutdown signal, test time was about 4.000000 seconds 01:32:12.001 01:32:12.001 Latency(us) 01:32:12.001 [2024-12-09T05:26:54.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:12.001 [2024-12-09T05:26:54.457Z] =================================================================================================================== 01:32:12.001 [2024-12-09T05:26:54.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77923' 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77923 01:32:12.001 05:26:54 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77923 01:32:16.192 Remove shared memory files 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 01:32:16.192 05:26:57 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 01:32:16.192 01:32:16.192 real 0m26.169s 01:32:16.192 user 0m28.509s 01:32:16.192 sys 0m1.476s 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:16.192 05:26:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:32:16.192 ************************************ 01:32:16.192 END TEST ftl_bdevperf 01:32:16.192 ************************************ 01:32:16.192 05:26:58 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 01:32:16.192 05:26:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:16.192 05:26:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:16.192 05:26:58 ftl -- common/autotest_common.sh@10 -- # set +x 01:32:16.192 ************************************ 01:32:16.192 START TEST ftl_trim 01:32:16.192 ************************************ 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 01:32:16.192 * Looking for test storage... 01:32:16.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:32:16.192 05:26:58 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:32:16.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:16.192 --rc genhtml_branch_coverage=1 01:32:16.192 --rc genhtml_function_coverage=1 01:32:16.192 --rc genhtml_legend=1 01:32:16.192 --rc geninfo_all_blocks=1 01:32:16.192 --rc geninfo_unexecuted_blocks=1 01:32:16.192 01:32:16.192 ' 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:32:16.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:16.192 --rc genhtml_branch_coverage=1 01:32:16.192 --rc genhtml_function_coverage=1 01:32:16.192 --rc genhtml_legend=1 01:32:16.192 --rc geninfo_all_blocks=1 01:32:16.192 --rc geninfo_unexecuted_blocks=1 01:32:16.192 01:32:16.192 ' 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:32:16.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:16.192 --rc genhtml_branch_coverage=1 01:32:16.192 --rc genhtml_function_coverage=1 01:32:16.192 --rc genhtml_legend=1 01:32:16.192 --rc geninfo_all_blocks=1 01:32:16.192 --rc geninfo_unexecuted_blocks=1 01:32:16.192 01:32:16.192 ' 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:32:16.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:16.192 --rc genhtml_branch_coverage=1 01:32:16.192 --rc genhtml_function_coverage=1 01:32:16.192 --rc genhtml_legend=1 01:32:16.192 --rc geninfo_all_blocks=1 01:32:16.192 --rc geninfo_unexecuted_blocks=1 01:32:16.192 01:32:16.192 ' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:32:16.192 05:26:58 ftl.ftl_trim -- 
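By this point common.sh and trim.sh have fixed the run's parameters: base device 0000:00:11.0, NV-cache device 0000:00:10.0, a 240 s RPC timeout, 65536-block data writes, 1024-block unmaps, and an FTL bdev named ftl0 whose config lands in ftl.json. Rerunning the same test by hand would reduce to roughly this (paths as in this workspace):

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # args: base (data) device BDF, then write-buffer cache device BDF
  /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0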
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78287 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:32:16.192 05:26:58 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78287 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78287 ']' 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:16.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:16.192 05:26:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:32:16.192 [2024-12-09 05:26:58.466960] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:32:16.192 [2024-12-09 05:26:58.467109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78287 ] 01:32:16.466 [2024-12-09 05:26:58.654673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:32:16.466 [2024-12-09 05:26:58.794413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:32:16.466 [2024-12-09 05:26:58.794488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:16.466 [2024-12-09 05:26:58.794547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:32:17.401 05:26:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:17.401 05:26:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 01:32:17.401 05:26:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:32:17.401 05:26:59 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 01:32:17.401 05:26:59 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:32:17.401 05:26:59 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 01:32:17.401 05:26:59 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 01:32:17.401 05:26:59 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:32:17.660 05:27:00 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:32:17.660 05:27:00 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 01:32:17.660 05:27:00 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:32:17.660 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:32:17.660 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:32:17.660 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:32:17.660 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:32:17.660 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:32:17.918 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:32:17.918 { 01:32:17.918 "name": "nvme0n1", 01:32:17.918 "aliases": [ 
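spdk_tgt comes up with core mask 0x7 and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the DPDK and reactor notices that follow confirm reactors running on cores 0-2. The launch-and-wait pattern, with the helper simplified to a plain RPC poll (the real waitforlisten allows up to 100 retries, as the max_retries=100 line shows):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # poll until the UNIX-domain RPC socket answers
  for ((i = 0; i < 100; i++)); do
      "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done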
01:32:17.918 "1a24459b-6566-475a-be9f-7f364e62c77c" 01:32:17.918 ], 01:32:17.918 "product_name": "NVMe disk", 01:32:17.918 "block_size": 4096, 01:32:17.918 "num_blocks": 1310720, 01:32:17.918 "uuid": "1a24459b-6566-475a-be9f-7f364e62c77c", 01:32:17.918 "numa_id": -1, 01:32:17.918 "assigned_rate_limits": { 01:32:17.918 "rw_ios_per_sec": 0, 01:32:17.918 "rw_mbytes_per_sec": 0, 01:32:17.918 "r_mbytes_per_sec": 0, 01:32:17.918 "w_mbytes_per_sec": 0 01:32:17.918 }, 01:32:17.918 "claimed": true, 01:32:17.918 "claim_type": "read_many_write_one", 01:32:17.918 "zoned": false, 01:32:17.918 "supported_io_types": { 01:32:17.918 "read": true, 01:32:17.918 "write": true, 01:32:17.918 "unmap": true, 01:32:17.918 "flush": true, 01:32:17.918 "reset": true, 01:32:17.918 "nvme_admin": true, 01:32:17.918 "nvme_io": true, 01:32:17.918 "nvme_io_md": false, 01:32:17.918 "write_zeroes": true, 01:32:17.918 "zcopy": false, 01:32:17.918 "get_zone_info": false, 01:32:17.918 "zone_management": false, 01:32:17.918 "zone_append": false, 01:32:17.918 "compare": true, 01:32:17.918 "compare_and_write": false, 01:32:17.918 "abort": true, 01:32:17.918 "seek_hole": false, 01:32:17.918 "seek_data": false, 01:32:17.918 "copy": true, 01:32:17.918 "nvme_iov_md": false 01:32:17.918 }, 01:32:17.918 "driver_specific": { 01:32:17.918 "nvme": [ 01:32:17.918 { 01:32:17.918 "pci_address": "0000:00:11.0", 01:32:17.918 "trid": { 01:32:17.918 "trtype": "PCIe", 01:32:17.918 "traddr": "0000:00:11.0" 01:32:17.918 }, 01:32:17.918 "ctrlr_data": { 01:32:17.918 "cntlid": 0, 01:32:17.918 "vendor_id": "0x1b36", 01:32:17.918 "model_number": "QEMU NVMe Ctrl", 01:32:17.918 "serial_number": "12341", 01:32:17.918 "firmware_revision": "8.0.0", 01:32:17.918 "subnqn": "nqn.2019-08.org.qemu:12341", 01:32:17.918 "oacs": { 01:32:17.918 "security": 0, 01:32:17.918 "format": 1, 01:32:17.918 "firmware": 0, 01:32:17.918 "ns_manage": 1 01:32:17.918 }, 01:32:17.918 "multi_ctrlr": false, 01:32:17.918 "ana_reporting": false 01:32:17.918 }, 01:32:17.918 "vs": { 01:32:17.918 "nvme_version": "1.4" 01:32:17.918 }, 01:32:17.918 "ns_data": { 01:32:17.918 "id": 1, 01:32:17.918 "can_share": false 01:32:17.918 } 01:32:17.918 } 01:32:17.918 ], 01:32:17.918 "mp_policy": "active_passive" 01:32:17.918 } 01:32:17.918 } 01:32:17.918 ]' 01:32:17.918 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:32:17.918 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:32:17.918 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:32:18.177 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 01:32:18.177 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:32:18.177 05:27:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=4f29216b-d743-4a16-a57a-0d62f261b4aa 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 01:32:18.177 05:27:00 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 4f29216b-d743-4a16-a57a-0d62f261b4aa 01:32:18.435 05:27:00 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:32:18.694 05:27:01 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8fb93c42-fdd1-418a-a35f-b87c841c6766 01:32:18.694 05:27:01 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fb93c42-fdd1-418a-a35f-b87c841c6766 01:32:18.952 05:27:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:18.952 05:27:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:18.953 05:27:01 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 01:32:18.953 05:27:01 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:32:18.953 05:27:01 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:18.953 05:27:01 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 01:32:18.953 05:27:01 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:18.953 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:18.953 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:32:18.953 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:32:18.953 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:32:18.953 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:19.212 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:32:19.212 { 01:32:19.212 "name": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:19.212 "aliases": [ 01:32:19.212 "lvs/nvme0n1p0" 01:32:19.212 ], 01:32:19.212 "product_name": "Logical Volume", 01:32:19.212 "block_size": 4096, 01:32:19.212 "num_blocks": 26476544, 01:32:19.212 "uuid": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:19.212 "assigned_rate_limits": { 01:32:19.212 "rw_ios_per_sec": 0, 01:32:19.212 "rw_mbytes_per_sec": 0, 01:32:19.212 "r_mbytes_per_sec": 0, 01:32:19.212 "w_mbytes_per_sec": 0 01:32:19.212 }, 01:32:19.212 "claimed": false, 01:32:19.212 "zoned": false, 01:32:19.212 "supported_io_types": { 01:32:19.212 "read": true, 01:32:19.212 "write": true, 01:32:19.212 "unmap": true, 01:32:19.212 "flush": false, 01:32:19.212 "reset": true, 01:32:19.212 "nvme_admin": false, 01:32:19.212 "nvme_io": false, 01:32:19.212 "nvme_io_md": false, 01:32:19.212 "write_zeroes": true, 01:32:19.212 "zcopy": false, 01:32:19.212 "get_zone_info": false, 01:32:19.212 "zone_management": false, 01:32:19.212 "zone_append": false, 01:32:19.212 "compare": false, 01:32:19.212 "compare_and_write": false, 01:32:19.212 "abort": false, 01:32:19.212 "seek_hole": true, 01:32:19.212 "seek_data": true, 01:32:19.212 "copy": false, 01:32:19.212 "nvme_iov_md": false 01:32:19.212 }, 01:32:19.212 "driver_specific": { 01:32:19.212 "lvol": { 01:32:19.212 "lvol_store_uuid": "8fb93c42-fdd1-418a-a35f-b87c841c6766", 01:32:19.212 "base_bdev": "nvme0n1", 01:32:19.212 "thin_provision": true, 01:32:19.212 "num_allocated_clusters": 0, 01:32:19.212 "snapshot": false, 01:32:19.212 "clone": false, 01:32:19.212 "esnap_clone": false 01:32:19.212 } 01:32:19.212 } 01:32:19.212 } 01:32:19.212 ]' 01:32:19.212 05:27:01 ftl.ftl_trim -- 
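clear_lvols sweeps any lvstores left over from earlier tests, then a fresh store named lvs is created on nvme0n1 and a thin-provisioned (-t) 103424 MiB volume is carved from it; its UUID (e8843ea4-...) becomes the FTL base device. The same sequence as bare RPCs:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # delete stale lvstores by UUID (the clear_lvols loop above)
  for lvs in $("$rpc_py" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      "$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"
  done
  # new store on the base namespace, then a thin 103424 MiB volume in it
  lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)
  "$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"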
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:32:19.212 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:32:19.212 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:32:19.212 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 01:32:19.212 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:32:19.212 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 01:32:19.212 05:27:01 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 01:32:19.212 05:27:01 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 01:32:19.212 05:27:01 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:32:19.471 05:27:01 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:32:19.471 05:27:01 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 01:32:19.471 05:27:01 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:19.471 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:19.471 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:32:19.471 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:32:19.471 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:32:19.471 05:27:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:19.729 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:32:19.729 { 01:32:19.729 "name": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:19.729 "aliases": [ 01:32:19.729 "lvs/nvme0n1p0" 01:32:19.729 ], 01:32:19.729 "product_name": "Logical Volume", 01:32:19.729 "block_size": 4096, 01:32:19.729 "num_blocks": 26476544, 01:32:19.729 "uuid": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:19.729 "assigned_rate_limits": { 01:32:19.729 "rw_ios_per_sec": 0, 01:32:19.729 "rw_mbytes_per_sec": 0, 01:32:19.729 "r_mbytes_per_sec": 0, 01:32:19.729 "w_mbytes_per_sec": 0 01:32:19.729 }, 01:32:19.729 "claimed": false, 01:32:19.729 "zoned": false, 01:32:19.729 "supported_io_types": { 01:32:19.729 "read": true, 01:32:19.729 "write": true, 01:32:19.729 "unmap": true, 01:32:19.729 "flush": false, 01:32:19.729 "reset": true, 01:32:19.729 "nvme_admin": false, 01:32:19.729 "nvme_io": false, 01:32:19.729 "nvme_io_md": false, 01:32:19.729 "write_zeroes": true, 01:32:19.729 "zcopy": false, 01:32:19.729 "get_zone_info": false, 01:32:19.729 "zone_management": false, 01:32:19.729 "zone_append": false, 01:32:19.729 "compare": false, 01:32:19.729 "compare_and_write": false, 01:32:19.729 "abort": false, 01:32:19.729 "seek_hole": true, 01:32:19.729 "seek_data": true, 01:32:19.729 "copy": false, 01:32:19.729 "nvme_iov_md": false 01:32:19.729 }, 01:32:19.729 "driver_specific": { 01:32:19.729 "lvol": { 01:32:19.729 "lvol_store_uuid": "8fb93c42-fdd1-418a-a35f-b87c841c6766", 01:32:19.729 "base_bdev": "nvme0n1", 01:32:19.729 "thin_provision": true, 01:32:19.729 "num_allocated_clusters": 0, 01:32:19.729 "snapshot": false, 01:32:19.729 "clone": false, 01:32:19.729 "esnap_clone": false 01:32:19.729 } 01:32:19.729 } 01:32:19.729 } 01:32:19.729 ]' 01:32:19.729 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:32:19.729 05:27:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 01:32:19.729 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:32:19.729 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 01:32:19.729 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:32:19.729 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 01:32:19.729 05:27:02 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 01:32:19.729 05:27:02 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:32:19.988 05:27:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 01:32:19.988 05:27:02 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 01:32:19.988 05:27:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:19.988 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:19.988 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:32:19.988 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:32:19.988 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:32:19.988 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:32:20.247 { 01:32:20.247 "name": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:20.247 "aliases": [ 01:32:20.247 "lvs/nvme0n1p0" 01:32:20.247 ], 01:32:20.247 "product_name": "Logical Volume", 01:32:20.247 "block_size": 4096, 01:32:20.247 "num_blocks": 26476544, 01:32:20.247 "uuid": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:20.247 "assigned_rate_limits": { 01:32:20.247 "rw_ios_per_sec": 0, 01:32:20.247 "rw_mbytes_per_sec": 0, 01:32:20.247 "r_mbytes_per_sec": 0, 01:32:20.247 "w_mbytes_per_sec": 0 01:32:20.247 }, 01:32:20.247 "claimed": false, 01:32:20.247 "zoned": false, 01:32:20.247 "supported_io_types": { 01:32:20.247 "read": true, 01:32:20.247 "write": true, 01:32:20.247 "unmap": true, 01:32:20.247 "flush": false, 01:32:20.247 "reset": true, 01:32:20.247 "nvme_admin": false, 01:32:20.247 "nvme_io": false, 01:32:20.247 "nvme_io_md": false, 01:32:20.247 "write_zeroes": true, 01:32:20.247 "zcopy": false, 01:32:20.247 "get_zone_info": false, 01:32:20.247 "zone_management": false, 01:32:20.247 "zone_append": false, 01:32:20.247 "compare": false, 01:32:20.247 "compare_and_write": false, 01:32:20.247 "abort": false, 01:32:20.247 "seek_hole": true, 01:32:20.247 "seek_data": true, 01:32:20.247 "copy": false, 01:32:20.247 "nvme_iov_md": false 01:32:20.247 }, 01:32:20.247 "driver_specific": { 01:32:20.247 "lvol": { 01:32:20.247 "lvol_store_uuid": "8fb93c42-fdd1-418a-a35f-b87c841c6766", 01:32:20.247 "base_bdev": "nvme0n1", 01:32:20.247 "thin_provision": true, 01:32:20.247 "num_allocated_clusters": 0, 01:32:20.247 "snapshot": false, 01:32:20.247 "clone": false, 01:32:20.247 "esnap_clone": false 01:32:20.247 } 01:32:20.247 } 01:32:20.247 } 01:32:20.247 ]' 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
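With no explicit cache size given, common.sh derives one from the base volume, 5171 MiB here (which works out to about 5 % of the 103424 MiB lvol), attaches the second controller as nvc0 at 0000:00:10.0, and splits nvc0n1 into a single partition of that size; the split bdev becomes the write-buffer cache. The split itself is one RPC:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # one 5171 MiB split; split bdevs are named <base>p<N>, so this yields nvc0n1p0
  "$rpc_py" bdev_split_create nvc0n1 -s 5171 1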
nb=26476544 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:32:20.247 05:27:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 01:32:20.247 05:27:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 01:32:20.247 05:27:02 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 01:32:20.507 [2024-12-09 05:27:02.843873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.507 [2024-12-09 05:27:02.843940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:32:20.507 [2024-12-09 05:27:02.843962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:32:20.507 [2024-12-09 05:27:02.843974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.507 [2024-12-09 05:27:02.847888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.507 [2024-12-09 05:27:02.847933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:20.507 [2024-12-09 05:27:02.847950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.877 ms 01:32:20.507 [2024-12-09 05:27:02.847961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.507 [2024-12-09 05:27:02.848156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:32:20.507 [2024-12-09 05:27:02.849199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:32:20.507 [2024-12-09 05:27:02.849239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.507 [2024-12-09 05:27:02.849251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:20.507 [2024-12-09 05:27:02.849265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 01:32:20.507 [2024-12-09 05:27:02.849276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.507 [2024-12-09 05:27:02.849584] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ea97d973-01dc-423e-88bb-a65e4c614878 01:32:20.507 [2024-12-09 05:27:02.852065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.507 [2024-12-09 05:27:02.852108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:32:20.507 [2024-12-09 05:27:02.852121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 01:32:20.507 [2024-12-09 05:27:02.852137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.507 [2024-12-09 05:27:02.865768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.507 [2024-12-09 05:27:02.865815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:20.507 [2024-12-09 05:27:02.865830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.536 ms 01:32:20.507 [2024-12-09 05:27:02.865847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.507 [2024-12-09 05:27:02.866031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.507 [2024-12-09 05:27:02.866055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:20.507 [2024-12-09 05:27:02.866067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
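bdev_ftl_create is where the two halves meet: the thin lvol as base device, nvc0n1p0 as cache, a core mask matching the target's 0x7, a 60 MiB DRAM budget for the L2P table, and 10 % overprovisioning. It is issued with the 240 s RPC timeout because first-time creation scrubs the NV-cache data region, which the notices below show taking about 4 s here:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base: the thin-provisioned lvol; cache: the 5171 MiB split of nvc0n1
  "$rpc_py" -t 240 bdev_ftl_create -b ftl0 \
      -d e8843ea4-8e7b-4ad6-910d-8a6d4af92f57 -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10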
[FTL][ftl0] duration: 0.100 ms 01:32:20.507 [2024-12-09 05:27:02.866086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.507 [2024-12-09 05:27:02.866139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.508 [2024-12-09 05:27:02.866154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:32:20.508 [2024-12-09 05:27:02.866165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:32:20.508 [2024-12-09 05:27:02.866183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.508 [2024-12-09 05:27:02.866231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:32:20.508 [2024-12-09 05:27:02.872633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.508 [2024-12-09 05:27:02.872672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:20.508 [2024-12-09 05:27:02.872689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.416 ms 01:32:20.508 [2024-12-09 05:27:02.872701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.508 [2024-12-09 05:27:02.872791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.508 [2024-12-09 05:27:02.872828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:32:20.508 [2024-12-09 05:27:02.872844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:32:20.508 [2024-12-09 05:27:02.872855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.508 [2024-12-09 05:27:02.872896] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:32:20.508 [2024-12-09 05:27:02.873037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:32:20.508 [2024-12-09 05:27:02.873065] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:32:20.508 [2024-12-09 05:27:02.873080] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:32:20.508 [2024-12-09 05:27:02.873098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873110] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873126] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:32:20.508 [2024-12-09 05:27:02.873136] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:32:20.508 [2024-12-09 05:27:02.873155] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:32:20.508 [2024-12-09 05:27:02.873165] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:32:20.508 [2024-12-09 05:27:02.873180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.508 [2024-12-09 05:27:02.873191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:32:20.508 [2024-12-09 05:27:02.873207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 01:32:20.508 [2024-12-09 05:27:02.873218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.508 [2024-12-09 05:27:02.873314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:32:20.508 [2024-12-09 05:27:02.873328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:32:20.508 [2024-12-09 05:27:02.873342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 01:32:20.508 [2024-12-09 05:27:02.873352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.508 [2024-12-09 05:27:02.873499] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:32:20.508 [2024-12-09 05:27:02.873517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:32:20.508 [2024-12-09 05:27:02.873532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:32:20.508 [2024-12-09 05:27:02.873566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:32:20.508 [2024-12-09 05:27:02.873601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:20.508 [2024-12-09 05:27:02.873623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:32:20.508 [2024-12-09 05:27:02.873632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:32:20.508 [2024-12-09 05:27:02.873645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:20.508 [2024-12-09 05:27:02.873654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:32:20.508 [2024-12-09 05:27:02.873667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:32:20.508 [2024-12-09 05:27:02.873679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:32:20.508 [2024-12-09 05:27:02.873705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:32:20.508 [2024-12-09 05:27:02.873742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:32:20.508 [2024-12-09 05:27:02.873774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:32:20.508 [2024-12-09 05:27:02.873808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873830] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 01:32:20.508 [2024-12-09 05:27:02.873840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:20.508 [2024-12-09 05:27:02.873862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:32:20.508 [2024-12-09 05:27:02.873879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:20.508 [2024-12-09 05:27:02.873901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:32:20.508 [2024-12-09 05:27:02.873910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:32:20.508 [2024-12-09 05:27:02.873923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:20.508 [2024-12-09 05:27:02.873932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:32:20.508 [2024-12-09 05:27:02.873944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:32:20.508 [2024-12-09 05:27:02.873953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:32:20.508 [2024-12-09 05:27:02.873975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:32:20.508 [2024-12-09 05:27:02.873987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.873996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:32:20.508 [2024-12-09 05:27:02.874009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:32:20.508 [2024-12-09 05:27:02.874020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:20.508 [2024-12-09 05:27:02.874034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:20.508 [2024-12-09 05:27:02.874046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:32:20.508 [2024-12-09 05:27:02.874064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:32:20.508 [2024-12-09 05:27:02.874075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:32:20.508 [2024-12-09 05:27:02.874088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:32:20.508 [2024-12-09 05:27:02.874097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:32:20.508 [2024-12-09 05:27:02.874110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:32:20.508 [2024-12-09 05:27:02.874125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:32:20.508 [2024-12-09 05:27:02.874142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:20.508 [2024-12-09 05:27:02.874162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:32:20.508 [2024-12-09 05:27:02.874176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:32:20.508 [2024-12-09 05:27:02.874186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 01:32:20.508 [2024-12-09 05:27:02.874200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:32:20.508 [2024-12-09 05:27:02.874210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:32:20.508 [2024-12-09 05:27:02.874224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:32:20.508 [2024-12-09 05:27:02.874235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:32:20.508 [2024-12-09 05:27:02.874249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:32:20.508 [2024-12-09 05:27:02.874260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:32:20.508 [2024-12-09 05:27:02.874276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:32:20.508 [2024-12-09 05:27:02.874287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:32:20.508 [2024-12-09 05:27:02.874301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:32:20.508 [2024-12-09 05:27:02.874311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:32:20.508 [2024-12-09 05:27:02.874325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:32:20.508 [2024-12-09 05:27:02.874335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:32:20.509 [2024-12-09 05:27:02.874351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:20.509 [2024-12-09 05:27:02.874362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:32:20.509 [2024-12-09 05:27:02.874376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:32:20.509 [2024-12-09 05:27:02.874386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:32:20.509 [2024-12-09 05:27:02.874400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:32:20.509 [2024-12-09 05:27:02.874411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:20.509 [2024-12-09 05:27:02.874446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:32:20.509 [2024-12-09 05:27:02.874456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 01:32:20.509 [2024-12-09 05:27:02.874481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:20.509 [2024-12-09 05:27:02.874589] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 01:32:20.509 [2024-12-09 05:27:02.874610] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:32:24.698 [2024-12-09 05:27:06.924083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:06.924182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:32:24.699 [2024-12-09 05:27:06.924203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4056.066 ms 01:32:24.699 [2024-12-09 05:27:06.924218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:06.971422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:06.971506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:24.699 [2024-12-09 05:27:06.971525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.856 ms 01:32:24.699 [2024-12-09 05:27:06.971540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:06.971742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:06.971760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:32:24.699 [2024-12-09 05:27:06.971797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 01:32:24.699 [2024-12-09 05:27:06.971824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:07.041143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:07.041206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:24.699 [2024-12-09 05:27:07.041238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.389 ms 01:32:24.699 [2024-12-09 05:27:07.041254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:07.041347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:07.041363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:24.699 [2024-12-09 05:27:07.041375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:32:24.699 [2024-12-09 05:27:07.041389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:07.042214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:07.042239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:24.699 [2024-12-09 05:27:07.042252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 01:32:24.699 [2024-12-09 05:27:07.042266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:07.042394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:07.042409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:24.699 [2024-12-09 05:27:07.042439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 01:32:24.699 [2024-12-09 05:27:07.042458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:07.068263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:07.068315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 01:32:24.699 [2024-12-09 05:27:07.068331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.786 ms 01:32:24.699 [2024-12-09 05:27:07.068344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.699 [2024-12-09 05:27:07.082406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:32:24.699 [2024-12-09 05:27:07.109303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.699 [2024-12-09 05:27:07.109358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:32:24.699 [2024-12-09 05:27:07.109379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.868 ms 01:32:24.699 [2024-12-09 05:27:07.109390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.956 [2024-12-09 05:27:07.218797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.956 [2024-12-09 05:27:07.218876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:32:24.956 [2024-12-09 05:27:07.218899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.400 ms 01:32:24.956 [2024-12-09 05:27:07.218911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.957 [2024-12-09 05:27:07.219248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.957 [2024-12-09 05:27:07.219264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:32:24.957 [2024-12-09 05:27:07.219284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 01:32:24.957 [2024-12-09 05:27:07.219296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.957 [2024-12-09 05:27:07.255347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.957 [2024-12-09 05:27:07.255392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:32:24.957 [2024-12-09 05:27:07.255412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.059 ms 01:32:24.957 [2024-12-09 05:27:07.255443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.957 [2024-12-09 05:27:07.290871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.957 [2024-12-09 05:27:07.290922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:32:24.957 [2024-12-09 05:27:07.290942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.377 ms 01:32:24.957 [2024-12-09 05:27:07.290952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.957 [2024-12-09 05:27:07.291903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.957 [2024-12-09 05:27:07.291936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:32:24.957 [2024-12-09 05:27:07.291952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 01:32:24.957 [2024-12-09 05:27:07.291963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:24.957 [2024-12-09 05:27:07.396901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:24.957 [2024-12-09 05:27:07.396953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:32:24.957 [2024-12-09 05:27:07.396978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.054 ms 01:32:24.957 [2024-12-09 05:27:07.396989] mngt/ftl_mngt.c: 431:trace_step: 
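The numbers in the startup log are self-consistent: ftl0 will expose 23,592,960 4-KiB blocks, the L2P keeps one 4-byte entry per block ("L2P address size: 4" in the layout setup), and 23,592,960 × 4 B = 90 MiB, exactly the l2p region in the layout dump. The DRAM budget was capped at 60 MiB, so the full table cannot stay resident; that is what the "59 (of 60) MiB" notice above reports, with L2P pages loaded and evicted on demand. Checking the arithmetic:

  entries=23592960 entry_bytes=4
  echo $(( entries * entry_bytes / 1024 / 1024 ))   # 90 (MiB), vs --l2p_dram_limit 60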
*NOTICE*: [FTL][ftl0] status: 0 01:32:25.214 [2024-12-09 05:27:07.435891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:25.215 [2024-12-09 05:27:07.435938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:32:25.215 [2024-12-09 05:27:07.435957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.819 ms 01:32:25.215 [2024-12-09 05:27:07.435985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:25.215 [2024-12-09 05:27:07.471647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:25.215 [2024-12-09 05:27:07.471689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:32:25.215 [2024-12-09 05:27:07.471706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.612 ms 01:32:25.215 [2024-12-09 05:27:07.471716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:25.215 [2024-12-09 05:27:07.508754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:25.215 [2024-12-09 05:27:07.508817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:32:25.215 [2024-12-09 05:27:07.508835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.985 ms 01:32:25.215 [2024-12-09 05:27:07.508846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:25.215 [2024-12-09 05:27:07.508953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:25.215 [2024-12-09 05:27:07.508973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:32:25.215 [2024-12-09 05:27:07.508993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:32:25.215 [2024-12-09 05:27:07.509004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:25.215 [2024-12-09 05:27:07.509113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:25.215 [2024-12-09 05:27:07.509131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:32:25.215 [2024-12-09 05:27:07.509146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 01:32:25.215 [2024-12-09 05:27:07.509157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:25.215 [2024-12-09 05:27:07.510475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:32:25.215 [2024-12-09 05:27:07.515446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4673.860 ms, result 0 01:32:25.215 [2024-12-09 05:27:07.516475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:32:25.215 { 01:32:25.215 "name": "ftl0", 01:32:25.215 "uuid": "ea97d973-01dc-423e-88bb-a65e4c614878" 01:32:25.215 } 01:32:25.215 05:27:07 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 01:32:25.215 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 01:32:25.215 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:32:25.215 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 01:32:25.215 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:32:25.215 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:32:25.215 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:32:25.473 05:27:07 
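waitforbdev gives the new bdev up to 2000 ms to appear: it first flushes any pending examine callbacks, then looks the bdev up with a per-call timeout. Both steps are plain RPCs:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc_py" bdev_wait_for_examine            # let bdev examine callbacks finish
  "$rpc_py" bdev_get_bdevs -b ftl0 -t 2000   # fails if ftl0 is absent after 2 s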
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 01:32:25.731 [ 01:32:25.731 { 01:32:25.731 "name": "ftl0", 01:32:25.731 "aliases": [ 01:32:25.731 "ea97d973-01dc-423e-88bb-a65e4c614878" 01:32:25.731 ], 01:32:25.731 "product_name": "FTL disk", 01:32:25.731 "block_size": 4096, 01:32:25.731 "num_blocks": 23592960, 01:32:25.731 "uuid": "ea97d973-01dc-423e-88bb-a65e4c614878", 01:32:25.731 "assigned_rate_limits": { 01:32:25.731 "rw_ios_per_sec": 0, 01:32:25.731 "rw_mbytes_per_sec": 0, 01:32:25.731 "r_mbytes_per_sec": 0, 01:32:25.731 "w_mbytes_per_sec": 0 01:32:25.731 }, 01:32:25.731 "claimed": false, 01:32:25.731 "zoned": false, 01:32:25.731 "supported_io_types": { 01:32:25.731 "read": true, 01:32:25.731 "write": true, 01:32:25.731 "unmap": true, 01:32:25.731 "flush": true, 01:32:25.731 "reset": false, 01:32:25.731 "nvme_admin": false, 01:32:25.731 "nvme_io": false, 01:32:25.731 "nvme_io_md": false, 01:32:25.731 "write_zeroes": true, 01:32:25.731 "zcopy": false, 01:32:25.731 "get_zone_info": false, 01:32:25.731 "zone_management": false, 01:32:25.731 "zone_append": false, 01:32:25.731 "compare": false, 01:32:25.731 "compare_and_write": false, 01:32:25.731 "abort": false, 01:32:25.731 "seek_hole": false, 01:32:25.731 "seek_data": false, 01:32:25.731 "copy": false, 01:32:25.731 "nvme_iov_md": false 01:32:25.731 }, 01:32:25.731 "driver_specific": { 01:32:25.731 "ftl": { 01:32:25.731 "base_bdev": "e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:25.731 "cache": "nvc0n1p0" 01:32:25.731 } 01:32:25.731 } 01:32:25.731 } 01:32:25.731 ] 01:32:25.731 05:27:07 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 01:32:25.731 05:27:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 01:32:25.732 05:27:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:32:25.732 05:27:08 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 01:32:25.732 05:27:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 01:32:26.299 05:27:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 01:32:26.299 { 01:32:26.299 "name": "ftl0", 01:32:26.299 "aliases": [ 01:32:26.299 "ea97d973-01dc-423e-88bb-a65e4c614878" 01:32:26.299 ], 01:32:26.299 "product_name": "FTL disk", 01:32:26.299 "block_size": 4096, 01:32:26.299 "num_blocks": 23592960, 01:32:26.299 "uuid": "ea97d973-01dc-423e-88bb-a65e4c614878", 01:32:26.299 "assigned_rate_limits": { 01:32:26.299 "rw_ios_per_sec": 0, 01:32:26.299 "rw_mbytes_per_sec": 0, 01:32:26.299 "r_mbytes_per_sec": 0, 01:32:26.299 "w_mbytes_per_sec": 0 01:32:26.299 }, 01:32:26.299 "claimed": false, 01:32:26.299 "zoned": false, 01:32:26.299 "supported_io_types": { 01:32:26.299 "read": true, 01:32:26.299 "write": true, 01:32:26.299 "unmap": true, 01:32:26.299 "flush": true, 01:32:26.299 "reset": false, 01:32:26.299 "nvme_admin": false, 01:32:26.299 "nvme_io": false, 01:32:26.299 "nvme_io_md": false, 01:32:26.299 "write_zeroes": true, 01:32:26.299 "zcopy": false, 01:32:26.299 "get_zone_info": false, 01:32:26.299 "zone_management": false, 01:32:26.299 "zone_append": false, 01:32:26.299 "compare": false, 01:32:26.299 "compare_and_write": false, 01:32:26.299 "abort": false, 01:32:26.299 "seek_hole": false, 01:32:26.299 "seek_data": false, 01:32:26.299 "copy": false, 01:32:26.299 "nvme_iov_md": false 01:32:26.299 }, 01:32:26.299 "driver_specific": { 01:32:26.299 "ftl": { 01:32:26.299 "base_bdev": 
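trim.sh then snapshots the bdev subsystem into the JSON file handed to the I/O tools later: save_subsystem_config -n bdev emits the subsystem object, and the two echo calls wrap it in a {"subsystems": [...]} envelope. A sketch of that assembly (the redirect into $FTL_JSON_CONF is implied by the script rather than visible in the trace):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
      echo '{"subsystems": ['
      "$rpc_py" save_subsystem_config -n bdev
      echo ']}'
  } > "$FTL_JSON_CONF"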
"e8843ea4-8e7b-4ad6-910d-8a6d4af92f57", 01:32:26.299 "cache": "nvc0n1p0" 01:32:26.299 } 01:32:26.299 } 01:32:26.299 } 01:32:26.299 ]' 01:32:26.299 05:27:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 01:32:26.299 05:27:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 01:32:26.299 05:27:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:32:26.299 [2024-12-09 05:27:08.681220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.299 [2024-12-09 05:27:08.681295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:32:26.299 [2024-12-09 05:27:08.681334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:32:26.299 [2024-12-09 05:27:08.681349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.299 [2024-12-09 05:27:08.681392] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:32:26.299 [2024-12-09 05:27:08.686162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.299 [2024-12-09 05:27:08.686197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:32:26.299 [2024-12-09 05:27:08.686223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 01:32:26.299 [2024-12-09 05:27:08.686235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.299 [2024-12-09 05:27:08.686937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.299 [2024-12-09 05:27:08.686961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:32:26.299 [2024-12-09 05:27:08.686984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 01:32:26.299 [2024-12-09 05:27:08.686996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.299 [2024-12-09 05:27:08.689845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.299 [2024-12-09 05:27:08.689869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:32:26.299 [2024-12-09 05:27:08.689885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.808 ms 01:32:26.299 [2024-12-09 05:27:08.689896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.299 [2024-12-09 05:27:08.695627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.299 [2024-12-09 05:27:08.695664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:32:26.299 [2024-12-09 05:27:08.695681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.701 ms 01:32:26.299 [2024-12-09 05:27:08.695692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.299 [2024-12-09 05:27:08.734080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.299 [2024-12-09 05:27:08.734125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:32:26.299 [2024-12-09 05:27:08.734149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.322 ms 01:32:26.299 [2024-12-09 05:27:08.734160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.559 [2024-12-09 05:27:08.756845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.756889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:32:26.560 [2024-12-09 05:27:08.756912] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.595 ms 01:32:26.560 [2024-12-09 05:27:08.756922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.757249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.757269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:32:26.560 [2024-12-09 05:27:08.757285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 01:32:26.560 [2024-12-09 05:27:08.757296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.793427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.793472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:32:26.560 [2024-12-09 05:27:08.793490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.150 ms 01:32:26.560 [2024-12-09 05:27:08.793501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.828686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.828726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:32:26.560 [2024-12-09 05:27:08.828748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.132 ms 01:32:26.560 [2024-12-09 05:27:08.828758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.864000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.864043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:32:26.560 [2024-12-09 05:27:08.864060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.193 ms 01:32:26.560 [2024-12-09 05:27:08.864087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.899283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.899323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:32:26.560 [2024-12-09 05:27:08.899355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.091 ms 01:32:26.560 [2024-12-09 05:27:08.899365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.899455] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:32:26.560 [2024-12-09 05:27:08.899495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 
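bdev_ftl_unload drives the orderly shutdown traced above: IO channels are torn down, the L2P and every metadata region (NV cache, valid map, P2L, band and trim metadata, superblock) are persisted, and only then is the device marked clean — note "Set FTL dirty state" at startup versus "Set FTL clean state" here — so a later load can come up without recovery. The call is simply:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc_py" bdev_ftl_unload -b ftl0

The bands-validity dump that follows shows every band still free with zero write counts, as expected for a device that was created and unloaded before any user I/O.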
[2024-12-09 05:27:08.899594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 01:32:26.560 [2024-12-09 05:27:08.899947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.899999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:32:26.560 [2024-12-09 05:27:08.900855] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:32:26.560 [2024-12-09 05:27:08.900872] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878 01:32:26.560 [2024-12-09 05:27:08.900883] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:32:26.560 [2024-12-09 05:27:08.900897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:32:26.560 [2024-12-09 05:27:08.900911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:32:26.560 [2024-12-09 05:27:08.900925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:32:26.560 [2024-12-09 05:27:08.900935] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:32:26.560 [2024-12-09 05:27:08.900950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 01:32:26.560 [2024-12-09 05:27:08.900960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:32:26.560 [2024-12-09 05:27:08.900973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:32:26.560 [2024-12-09 05:27:08.900982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:32:26.560 [2024-12-09 05:27:08.900996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.901006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:32:26.560 [2024-12-09 05:27:08.901021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 01:32:26.560 [2024-12-09 05:27:08.901031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.921575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.921612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:32:26.560 [2024-12-09 05:27:08.921649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.531 ms 01:32:26.560 [2024-12-09 05:27:08.921660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.922308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:26.560 [2024-12-09 05:27:08.922331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:32:26.560 [2024-12-09 05:27:08.922346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 01:32:26.560 [2024-12-09 05:27:08.922356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.994409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.560 [2024-12-09 05:27:08.994452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:26.560 [2024-12-09 05:27:08.994486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.560 [2024-12-09 05:27:08.994498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.560 [2024-12-09 05:27:08.994637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.560 [2024-12-09 05:27:08.994651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:26.560 [2024-12-09 05:27:08.994666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.560 [2024-12-09 05:27:08.994677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.561 [2024-12-09 05:27:08.994778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.561 [2024-12-09 05:27:08.994796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:26.561 [2024-12-09 05:27:08.994814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.561 [2024-12-09 05:27:08.994825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.561 [2024-12-09 05:27:08.994869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.561 [2024-12-09 05:27:08.994881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:26.561 [2024-12-09 05:27:08.994895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.561 [2024-12-09 05:27:08.994906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 
05:27:09.132778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.132844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:26.819 [2024-12-09 05:27:09.132865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.132877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 05:27:09.238995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.239072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:26.819 [2024-12-09 05:27:09.239094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.239105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 05:27:09.239291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.239305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:26.819 [2024-12-09 05:27:09.239328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.239340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 05:27:09.239433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.239445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:26.819 [2024-12-09 05:27:09.239460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.239471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 05:27:09.239673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.239687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:26.819 [2024-12-09 05:27:09.239701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.239716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 05:27:09.239793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.239822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:32:26.819 [2024-12-09 05:27:09.239836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.239847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.819 [2024-12-09 05:27:09.239920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.819 [2024-12-09 05:27:09.239933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:26.819 [2024-12-09 05:27:09.239952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.819 [2024-12-09 05:27:09.239965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.820 [2024-12-09 05:27:09.240039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:26.820 [2024-12-09 05:27:09.240052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:26.820 [2024-12-09 05:27:09.240067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:26.820 [2024-12-09 05:27:09.240077] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:26.820 [2024-12-09 05:27:09.240324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.983 ms, result 0 01:32:26.820 true 01:32:26.820 05:27:09 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78287 01:32:26.820 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78287 ']' 01:32:26.820 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78287 01:32:26.820 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 01:32:27.078 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:27.079 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78287 01:32:27.079 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:27.079 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:27.079 killing process with pid 78287 01:32:27.079 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78287' 01:32:27.079 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78287 01:32:27.079 05:27:09 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78287 01:32:29.613 05:27:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 01:32:30.549 65536+0 records in 01:32:30.549 65536+0 records out 01:32:30.549 268435456 bytes (268 MB, 256 MiB) copied, 1.00579 s, 267 MB/s 01:32:30.549 05:27:12 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:32:30.808 [2024-12-09 05:27:13.068160] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
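The commands just above are the write-phase setup of the trim test: the previous FTL instance is unloaded over RPC (bdev_ftl_unload -b ftl0), the SPDK app holding it (pid 78287) is killed, a 256 MiB random pattern is generated with dd, and spdk_dd replays that pattern onto ftl0. The dd summary is internally consistent: 65536 blocks x 4 KiB/block = 268435456 bytes = 256 MiB, copied at ~267 MB/s. A minimal stand-alone sketch of the same two steps, using the repository paths shown in the log and assuming dd's output (the redirection is not visible in the xtrace) lands in the random_pattern file that spdk_dd then reads:

    # Generate the 256 MiB pattern: 65536 x 4 KiB = 268435456 B = 256 MiB.
    # The of= redirection is an assumption; the trace shows only the dd invocation.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # spdk_dd boots a one-shot SPDK app from ftl.json (which re-creates ftl0,
    # triggering the "FTL startup" management process logged below) and then
    # streams the pattern file into the ftl0 bdev.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json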
01:32:30.808 [2024-12-09 05:27:13.068316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78498 ] 01:32:30.808 [2024-12-09 05:27:13.257394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:31.118 [2024-12-09 05:27:13.390363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:31.393 [2024-12-09 05:27:13.802847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:32:31.393 [2024-12-09 05:27:13.802940] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:32:31.654 [2024-12-09 05:27:13.969796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:13.969856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:32:31.654 [2024-12-09 05:27:13.969873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:32:31.654 [2024-12-09 05:27:13.969883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:13.973284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:13.973326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:31.654 [2024-12-09 05:27:13.973341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 01:32:31.654 [2024-12-09 05:27:13.973350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:13.973452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:32:31.654 [2024-12-09 05:27:13.974561] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:32:31.654 [2024-12-09 05:27:13.974597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:13.974608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:31.654 [2024-12-09 05:27:13.974620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 01:32:31.654 [2024-12-09 05:27:13.974631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:13.977095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:32:31.654 [2024-12-09 05:27:13.996749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:13.996791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:32:31.654 [2024-12-09 05:27:13.996805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.687 ms 01:32:31.654 [2024-12-09 05:27:13.996816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:13.996917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:13.996932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:32:31.654 [2024-12-09 05:27:13.996944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 01:32:31.654 [2024-12-09 05:27:13.996954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.009076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:32:31.654 [2024-12-09 05:27:14.009106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:31.654 [2024-12-09 05:27:14.009119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.099 ms 01:32:31.654 [2024-12-09 05:27:14.009129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.009250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:14.009266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:31.654 [2024-12-09 05:27:14.009277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 01:32:31.654 [2024-12-09 05:27:14.009287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.009319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:14.009330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:32:31.654 [2024-12-09 05:27:14.009340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:32:31.654 [2024-12-09 05:27:14.009350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.009375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:32:31.654 [2024-12-09 05:27:14.015230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:14.015261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:31.654 [2024-12-09 05:27:14.015274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.871 ms 01:32:31.654 [2024-12-09 05:27:14.015300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.015352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:14.015365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:32:31.654 [2024-12-09 05:27:14.015378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:32:31.654 [2024-12-09 05:27:14.015388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.015416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:32:31.654 [2024-12-09 05:27:14.015442] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:32:31.654 [2024-12-09 05:27:14.015492] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:32:31.654 [2024-12-09 05:27:14.015513] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:32:31.654 [2024-12-09 05:27:14.015605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:32:31.654 [2024-12-09 05:27:14.015619] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:32:31.654 [2024-12-09 05:27:14.015633] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:32:31.654 [2024-12-09 05:27:14.015650] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:32:31.654 [2024-12-09 05:27:14.015663] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:32:31.654 [2024-12-09 05:27:14.015675] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:32:31.654 [2024-12-09 05:27:14.015687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:32:31.654 [2024-12-09 05:27:14.015697] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:32:31.654 [2024-12-09 05:27:14.015707] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:32:31.654 [2024-12-09 05:27:14.015718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:14.015728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:32:31.654 [2024-12-09 05:27:14.015740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 01:32:31.654 [2024-12-09 05:27:14.015750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.015828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.654 [2024-12-09 05:27:14.015844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:32:31.654 [2024-12-09 05:27:14.015855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:32:31.654 [2024-12-09 05:27:14.015865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.654 [2024-12-09 05:27:14.015956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:32:31.654 [2024-12-09 05:27:14.015970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:32:31.654 [2024-12-09 05:27:14.015982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:31.654 [2024-12-09 05:27:14.015993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:31.654 [2024-12-09 05:27:14.016004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:32:31.654 [2024-12-09 05:27:14.016014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:32:31.654 [2024-12-09 05:27:14.016024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:32:31.654 [2024-12-09 05:27:14.016033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:32:31.654 [2024-12-09 05:27:14.016042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:32:31.654 [2024-12-09 05:27:14.016052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:31.654 [2024-12-09 05:27:14.016063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:32:31.654 [2024-12-09 05:27:14.016085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:32:31.654 [2024-12-09 05:27:14.016095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:31.654 [2024-12-09 05:27:14.016105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:32:31.654 [2024-12-09 05:27:14.016115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:32:31.654 [2024-12-09 05:27:14.016125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:31.654 [2024-12-09 05:27:14.016135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:32:31.654 [2024-12-09 05:27:14.016144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:32:31.654 [2024-12-09 05:27:14.016154] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:31.654 [2024-12-09 05:27:14.016163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:32:31.654 [2024-12-09 05:27:14.016173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:32:31.654 [2024-12-09 05:27:14.016182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:31.654 [2024-12-09 05:27:14.016192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:32:31.654 [2024-12-09 05:27:14.016201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:31.655 [2024-12-09 05:27:14.016219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:32:31.655 [2024-12-09 05:27:14.016229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:31.655 [2024-12-09 05:27:14.016248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:32:31.655 [2024-12-09 05:27:14.016257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:31.655 [2024-12-09 05:27:14.016277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:32:31.655 [2024-12-09 05:27:14.016286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:31.655 [2024-12-09 05:27:14.016303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:32:31.655 [2024-12-09 05:27:14.016312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:32:31.655 [2024-12-09 05:27:14.016321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:31.655 [2024-12-09 05:27:14.016330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:32:31.655 [2024-12-09 05:27:14.016339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:32:31.655 [2024-12-09 05:27:14.016348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:32:31.655 [2024-12-09 05:27:14.016368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:32:31.655 [2024-12-09 05:27:14.016377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016386] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:32:31.655 [2024-12-09 05:27:14.016396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:32:31.655 [2024-12-09 05:27:14.016411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:31.655 [2024-12-09 05:27:14.016421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:31.655 [2024-12-09 05:27:14.016431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:32:31.655 [2024-12-09 05:27:14.016441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:32:31.655 [2024-12-09 05:27:14.016450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:32:31.655 
[2024-12-09 05:27:14.016471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:32:31.655 [2024-12-09 05:27:14.016481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:32:31.655 [2024-12-09 05:27:14.016491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:32:31.655 [2024-12-09 05:27:14.016503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:32:31.655 [2024-12-09 05:27:14.016517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:32:31.655 [2024-12-09 05:27:14.016540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:32:31.655 [2024-12-09 05:27:14.016551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:32:31.655 [2024-12-09 05:27:14.016562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:32:31.655 [2024-12-09 05:27:14.016573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:32:31.655 [2024-12-09 05:27:14.016584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:32:31.655 [2024-12-09 05:27:14.016595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:32:31.655 [2024-12-09 05:27:14.016606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:32:31.655 [2024-12-09 05:27:14.016616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:32:31.655 [2024-12-09 05:27:14.016626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:32:31.655 [2024-12-09 05:27:14.016679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:32:31.655 [2024-12-09 05:27:14.016691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:32:31.655 [2024-12-09 05:27:14.016712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:32:31.655 [2024-12-09 05:27:14.016723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:32:31.655 [2024-12-09 05:27:14.016735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:32:31.655 [2024-12-09 05:27:14.016747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.655 [2024-12-09 05:27:14.016763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:32:31.655 [2024-12-09 05:27:14.016773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 01:32:31.655 [2024-12-09 05:27:14.016783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.655 [2024-12-09 05:27:14.066115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.655 [2024-12-09 05:27:14.066153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:31.655 [2024-12-09 05:27:14.066167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.351 ms 01:32:31.655 [2024-12-09 05:27:14.066194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.655 [2024-12-09 05:27:14.066344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.655 [2024-12-09 05:27:14.066358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:32:31.655 [2024-12-09 05:27:14.066370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 01:32:31.655 [2024-12-09 05:27:14.066380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.145379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.145425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:31.915 [2024-12-09 05:27:14.145439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.101 ms 01:32:31.915 [2024-12-09 05:27:14.145466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.145559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.145574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:31.915 [2024-12-09 05:27:14.145587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:32:31.915 [2024-12-09 05:27:14.145599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.146364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.146386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:31.915 [2024-12-09 05:27:14.146412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 01:32:31.915 [2024-12-09 05:27:14.146423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.146576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.146593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:31.915 [2024-12-09 05:27:14.146605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 01:32:31.915 [2024-12-09 05:27:14.146615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.170796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.170831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:31.915 [2024-12-09 05:27:14.170844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.196 ms 01:32:31.915 [2024-12-09 05:27:14.170855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.190198] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 01:32:31.915 [2024-12-09 05:27:14.190238] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:32:31.915 [2024-12-09 05:27:14.190255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.190267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:32:31.915 [2024-12-09 05:27:14.190279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.293 ms 01:32:31.915 [2024-12-09 05:27:14.190289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.218713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.218753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:32:31.915 [2024-12-09 05:27:14.218768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.384 ms 01:32:31.915 [2024-12-09 05:27:14.218779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.235913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.235951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:32:31.915 [2024-12-09 05:27:14.235963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.081 ms 01:32:31.915 [2024-12-09 05:27:14.235974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.253018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.253065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:32:31.915 [2024-12-09 05:27:14.253077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.997 ms 01:32:31.915 [2024-12-09 05:27:14.253087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.253861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.253894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:32:31.915 [2024-12-09 05:27:14.253907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 01:32:31.915 [2024-12-09 05:27:14.253919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.348988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:31.915 [2024-12-09 05:27:14.349050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:32:31.915 [2024-12-09 05:27:14.349068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.189 ms 01:32:31.915 [2024-12-09 05:27:14.349081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:31.915 [2024-12-09 05:27:14.359868] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:32:32.175 [2024-12-09 05:27:14.385193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.385251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:32:32.175 [2024-12-09 05:27:14.385268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.054 ms 01:32:32.175 [2024-12-09 05:27:14.385280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:32.175 [2024-12-09 05:27:14.385442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.385474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:32:32.175 [2024-12-09 05:27:14.385489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:32:32.175 [2024-12-09 05:27:14.385516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:32.175 [2024-12-09 05:27:14.385591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.385604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:32:32.175 [2024-12-09 05:27:14.385632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 01:32:32.175 [2024-12-09 05:27:14.385644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:32.175 [2024-12-09 05:27:14.385689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.385711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:32:32.175 [2024-12-09 05:27:14.385723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 01:32:32.175 [2024-12-09 05:27:14.385733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:32.175 [2024-12-09 05:27:14.385780] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:32:32.175 [2024-12-09 05:27:14.385807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.385817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:32:32.175 [2024-12-09 05:27:14.385829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 01:32:32.175 [2024-12-09 05:27:14.385840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:32.175 [2024-12-09 05:27:14.421291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.421334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:32:32.175 [2024-12-09 05:27:14.421349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.484 ms 01:32:32.175 [2024-12-09 05:27:14.421359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:32.175 [2024-12-09 05:27:14.421512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:32.175 [2024-12-09 05:27:14.421528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:32:32.175 [2024-12-09 05:27:14.421541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:32:32.175 [2024-12-09 05:27:14.421551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
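The startup trace above is self-consistent with the nb=23592960 value the script extracted with jq '.[] .num_blocks' near the start of this section. At the 4 KiB FTL block size used here, 23592960 blocks is exactly 90 GiB of user-visible capacity; with 4 bytes per L2P entry ("L2P address size: 4") the mapping table needs exactly 90 MiB, which is why the layout dump reports "Region l2p ... blocks: 90.00 MiB"; and each of the 100 bands listed in the validity dumps covers 261120 blocks, i.e. 1020 MiB of user data, on the 103424 MiB base device. A quick shell check of those exact figures:

    # 23592960 blocks x 4096 B/block = 96636764160 B = exactly 90 GiB
    echo $(( 23592960 * 4096 ))
    # 23592960 L2P entries x 4 B / 1 MiB = 90 -> matches "blocks: 90.00 MiB"
    echo $(( 23592960 * 4 / 1048576 ))
    # 261120 blocks x 4096 B / 1 MiB = 1020 MiB of user data per band
    echo $(( 261120 * 4096 / 1048576 ))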
01:32:32.175 [2024-12-09 05:27:14.422872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:32:32.175 [2024-12-09 05:27:14.427202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 453.436 ms, result 0 01:32:32.175 [2024-12-09 05:27:14.428156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:32:32.175 [2024-12-09 05:27:14.445948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:32:33.111  [2024-12-09T05:27:16.503Z] Copying: 20/256 [MB] (20 MBps) [2024-12-09T05:27:17.876Z] Copying: 42/256 [MB] (21 MBps) [2024-12-09T05:27:18.475Z] Copying: 64/256 [MB] (21 MBps) [2024-12-09T05:27:19.849Z] Copying: 85/256 [MB] (21 MBps) [2024-12-09T05:27:20.782Z] Copying: 106/256 [MB] (20 MBps) [2024-12-09T05:27:21.716Z] Copying: 127/256 [MB] (20 MBps) [2024-12-09T05:27:22.650Z] Copying: 147/256 [MB] (20 MBps) [2024-12-09T05:27:23.585Z] Copying: 169/256 [MB] (21 MBps) [2024-12-09T05:27:24.523Z] Copying: 191/256 [MB] (21 MBps) [2024-12-09T05:27:25.461Z] Copying: 213/256 [MB] (21 MBps) [2024-12-09T05:27:26.397Z] Copying: 236/256 [MB] (23 MBps) [2024-12-09T05:27:26.397Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-09 05:27:26.306098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:32:43.941 [2024-12-09 05:27:26.320579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.320633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:32:43.941 [2024-12-09 05:27:26.320655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:32:43.941 [2024-12-09 05:27:26.320675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.320699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:32:43.941 [2024-12-09 05:27:26.324915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.324945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:32:43.941 [2024-12-09 05:27:26.324964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 01:32:43.941 [2024-12-09 05:27:26.324974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.327044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.327086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:32:43.941 [2024-12-09 05:27:26.327098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 01:32:43.941 [2024-12-09 05:27:26.327108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.333752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.333797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:32:43.941 [2024-12-09 05:27:26.333809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 01:32:43.941 [2024-12-09 05:27:26.333819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.339004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 
[2024-12-09 05:27:26.339035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:32:43.941 [2024-12-09 05:27:26.339047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.147 ms 01:32:43.941 [2024-12-09 05:27:26.339056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.373221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.373256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:32:43.941 [2024-12-09 05:27:26.373269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.174 ms 01:32:43.941 [2024-12-09 05:27:26.373278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.394948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.394994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:32:43.941 [2024-12-09 05:27:26.395012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.633 ms 01:32:43.941 [2024-12-09 05:27:26.395022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:43.941 [2024-12-09 05:27:26.395167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:43.941 [2024-12-09 05:27:26.395185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:32:43.941 [2024-12-09 05:27:26.395196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 01:32:43.941 [2024-12-09 05:27:26.395219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.202 [2024-12-09 05:27:26.430673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.202 [2024-12-09 05:27:26.430710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:32:44.202 [2024-12-09 05:27:26.430722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.494 ms 01:32:44.202 [2024-12-09 05:27:26.430733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.202 [2024-12-09 05:27:26.465607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.202 [2024-12-09 05:27:26.465640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:32:44.202 [2024-12-09 05:27:26.465652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.872 ms 01:32:44.202 [2024-12-09 05:27:26.465662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.202 [2024-12-09 05:27:26.500657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.202 [2024-12-09 05:27:26.500690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:32:44.202 [2024-12-09 05:27:26.500702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.998 ms 01:32:44.202 [2024-12-09 05:27:26.500712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.202 [2024-12-09 05:27:26.534301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.202 [2024-12-09 05:27:26.534334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:32:44.202 [2024-12-09 05:27:26.534346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.561 ms 01:32:44.202 [2024-12-09 05:27:26.534355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.202 [2024-12-09 05:27:26.534407] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:32:44.202 [2024-12-09 05:27:26.534424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534717] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 
05:27:26.534977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.534996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:32:44.202 [2024-12-09 05:27:26.535242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
01:32:44.202 [2024-12-09 05:27:26.535252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:32:44.203 [2024-12-09 05:27:26.535563] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:32:44.203 [2024-12-09 05:27:26.535574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878 01:32:44.203 [2024-12-09 05:27:26.535585] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:32:44.203 [2024-12-09 05:27:26.535595] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:32:44.203 [2024-12-09 05:27:26.535605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:32:44.203 [2024-12-09 05:27:26.535615] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:32:44.203 [2024-12-09 05:27:26.535625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:32:44.203 [2024-12-09 05:27:26.535635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:32:44.203 [2024-12-09 05:27:26.535645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:32:44.203 [2024-12-09 05:27:26.535655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:32:44.203 [2024-12-09 05:27:26.535664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:32:44.203 [2024-12-09 05:27:26.535674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.203 [2024-12-09 05:27:26.535699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:32:44.203 [2024-12-09 05:27:26.535709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 01:32:44.203 [2024-12-09 05:27:26.535718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.203 [2024-12-09 05:27:26.556152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.203 [2024-12-09 05:27:26.556184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:32:44.203 [2024-12-09 05:27:26.556196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.448 ms 01:32:44.203 [2024-12-09 05:27:26.556206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.203 [2024-12-09 05:27:26.556889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:44.203 [2024-12-09 05:27:26.556915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:32:44.203 [2024-12-09 05:27:26.556928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 01:32:44.203 [2024-12-09 05:27:26.556937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.203 [2024-12-09 05:27:26.611988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.203 [2024-12-09 05:27:26.612023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:44.203 [2024-12-09 05:27:26.612037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.203 [2024-12-09 05:27:26.612048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.203 [2024-12-09 05:27:26.612172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.203 [2024-12-09 05:27:26.612184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:44.203 [2024-12-09 05:27:26.612203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 01:32:44.203 [2024-12-09 05:27:26.612213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.203 [2024-12-09 05:27:26.612263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.203 [2024-12-09 05:27:26.612277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:44.203 [2024-12-09 05:27:26.612288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.203 [2024-12-09 05:27:26.612298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.203 [2024-12-09 05:27:26.612316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.203 [2024-12-09 05:27:26.612331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:44.203 [2024-12-09 05:27:26.612342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.203 [2024-12-09 05:27:26.612352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.738698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.738752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:44.479 [2024-12-09 05:27:26.738768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.738781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.838774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.838824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:44.479 [2024-12-09 05:27:26.838841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.838852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.838962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.838975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:44.479 [2024-12-09 05:27:26.838994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.839005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.839037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.839047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:44.479 [2024-12-09 05:27:26.839064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.839074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.839195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.839208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:44.479 [2024-12-09 05:27:26.839219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.839230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.839268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.839280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:32:44.479 
[2024-12-09 05:27:26.839291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.839305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.839355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.839366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:44.479 [2024-12-09 05:27:26.839377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.839386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.839437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:44.479 [2024-12-09 05:27:26.839450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:44.479 [2024-12-09 05:27:26.839483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:44.479 [2024-12-09 05:27:26.839494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:44.479 [2024-12-09 05:27:26.839739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.980 ms, result 0 01:32:45.853 01:32:45.853 01:32:45.853 05:27:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78656 01:32:45.853 05:27:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 01:32:45.853 05:27:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78656 01:32:45.853 05:27:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78656 ']' 01:32:45.853 05:27:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:45.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:45.853 05:27:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:45.853 05:27:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:45.853 05:27:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:45.853 05:27:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:32:46.111 [2024-12-09 05:27:28.337774] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:32:46.111 [2024-12-09 05:27:28.337910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78656 ] 01:32:46.111 [2024-12-09 05:27:28.522830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:46.370 [2024-12-09 05:27:28.650079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:47.305 05:27:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:47.305 05:27:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 01:32:47.305 05:27:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 01:32:47.563 [2024-12-09 05:27:29.856436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:32:47.563 [2024-12-09 05:27:29.856522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:32:47.823 [2024-12-09 05:27:30.040284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.040342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:32:47.823 [2024-12-09 05:27:30.040363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:32:47.823 [2024-12-09 05:27:30.040374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.043912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.043953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:47.823 [2024-12-09 05:27:30.043968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.515 ms 01:32:47.823 [2024-12-09 05:27:30.043978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.044101] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:32:47.823 [2024-12-09 05:27:30.045081] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:32:47.823 [2024-12-09 05:27:30.045118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.045130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:47.823 [2024-12-09 05:27:30.045144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 01:32:47.823 [2024-12-09 05:27:30.045157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.047771] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:32:47.823 [2024-12-09 05:27:30.067980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.068025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:32:47.823 [2024-12-09 05:27:30.068040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.249 ms 01:32:47.823 [2024-12-09 05:27:30.068054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.068172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.068190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:32:47.823 [2024-12-09 05:27:30.068201] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 01:32:47.823 [2024-12-09 05:27:30.068215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.080169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.080206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:47.823 [2024-12-09 05:27:30.080219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.920 ms 01:32:47.823 [2024-12-09 05:27:30.080233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.080409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.080430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:47.823 [2024-12-09 05:27:30.080445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 01:32:47.823 [2024-12-09 05:27:30.080476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.080511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.080527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:32:47.823 [2024-12-09 05:27:30.080537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:32:47.823 [2024-12-09 05:27:30.080551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.080581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:32:47.823 [2024-12-09 05:27:30.086043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.086075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:47.823 [2024-12-09 05:27:30.086090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.476 ms 01:32:47.823 [2024-12-09 05:27:30.086100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.086162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.086184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:32:47.823 [2024-12-09 05:27:30.086203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:32:47.823 [2024-12-09 05:27:30.086213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.086240] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:32:47.823 [2024-12-09 05:27:30.086262] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:32:47.823 [2024-12-09 05:27:30.086312] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:32:47.823 [2024-12-09 05:27:30.086333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:32:47.823 [2024-12-09 05:27:30.086426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:32:47.823 [2024-12-09 05:27:30.086443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:32:47.823 [2024-12-09 05:27:30.086476] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:32:47.823 [2024-12-09 05:27:30.086489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:32:47.823 [2024-12-09 05:27:30.086504] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:32:47.823 [2024-12-09 05:27:30.086516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:32:47.823 [2024-12-09 05:27:30.086536] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:32:47.823 [2024-12-09 05:27:30.086546] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:32:47.823 [2024-12-09 05:27:30.086579] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:32:47.823 [2024-12-09 05:27:30.086590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.086607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:32:47.823 [2024-12-09 05:27:30.086618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 01:32:47.823 [2024-12-09 05:27:30.086640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.086713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.823 [2024-12-09 05:27:30.086731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:32:47.823 [2024-12-09 05:27:30.086741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:32:47.823 [2024-12-09 05:27:30.086756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.823 [2024-12-09 05:27:30.086848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:32:47.823 [2024-12-09 05:27:30.086873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:32:47.823 [2024-12-09 05:27:30.086883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:47.823 [2024-12-09 05:27:30.086899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:47.823 [2024-12-09 05:27:30.086918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:32:47.823 [2024-12-09 05:27:30.086932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:32:47.823 [2024-12-09 05:27:30.086941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:32:47.823 [2024-12-09 05:27:30.086962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:32:47.823 [2024-12-09 05:27:30.086971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:32:47.823 [2024-12-09 05:27:30.086994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:47.823 [2024-12-09 05:27:30.087003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:32:47.823 [2024-12-09 05:27:30.087032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:32:47.823 [2024-12-09 05:27:30.087041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:47.823 [2024-12-09 05:27:30.087056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:32:47.823 [2024-12-09 05:27:30.087066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:32:47.823 [2024-12-09 05:27:30.087078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:47.823 
[2024-12-09 05:27:30.087088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:32:47.823 [2024-12-09 05:27:30.087101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:32:47.823 [2024-12-09 05:27:30.087121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:47.823 [2024-12-09 05:27:30.087134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:32:47.823 [2024-12-09 05:27:30.087143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:32:47.823 [2024-12-09 05:27:30.087156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:47.823 [2024-12-09 05:27:30.087165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:32:47.823 [2024-12-09 05:27:30.087181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:32:47.823 [2024-12-09 05:27:30.087190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:47.823 [2024-12-09 05:27:30.087203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:32:47.823 [2024-12-09 05:27:30.087212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:32:47.823 [2024-12-09 05:27:30.087226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:47.823 [2024-12-09 05:27:30.087236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:32:47.823 [2024-12-09 05:27:30.087248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:32:47.823 [2024-12-09 05:27:30.087258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:47.823 [2024-12-09 05:27:30.087270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:32:47.824 [2024-12-09 05:27:30.087280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:32:47.824 [2024-12-09 05:27:30.087293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:47.824 [2024-12-09 05:27:30.087302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:32:47.824 [2024-12-09 05:27:30.087314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:32:47.824 [2024-12-09 05:27:30.087324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:47.824 [2024-12-09 05:27:30.087336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:32:47.824 [2024-12-09 05:27:30.087346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:32:47.824 [2024-12-09 05:27:30.087360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:47.824 [2024-12-09 05:27:30.087369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:32:47.824 [2024-12-09 05:27:30.087380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:32:47.824 [2024-12-09 05:27:30.087389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:47.824 [2024-12-09 05:27:30.087402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:32:47.824 [2024-12-09 05:27:30.087413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:32:47.824 [2024-12-09 05:27:30.087426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:47.824 [2024-12-09 05:27:30.087437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:47.824 [2024-12-09 05:27:30.087450] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 01:32:47.824 [2024-12-09 05:27:30.087459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:32:47.824 [2024-12-09 05:27:30.087471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:32:47.824 [2024-12-09 05:27:30.087499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:32:47.824 [2024-12-09 05:27:30.087512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:32:47.824 [2024-12-09 05:27:30.087521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:32:47.824 [2024-12-09 05:27:30.087536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:32:47.824 [2024-12-09 05:27:30.087550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:32:47.824 [2024-12-09 05:27:30.087581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:32:47.824 [2024-12-09 05:27:30.087594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:32:47.824 [2024-12-09 05:27:30.087605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:32:47.824 [2024-12-09 05:27:30.087619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:32:47.824 [2024-12-09 05:27:30.087630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:32:47.824 [2024-12-09 05:27:30.087643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:32:47.824 [2024-12-09 05:27:30.087653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:32:47.824 [2024-12-09 05:27:30.087666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:32:47.824 [2024-12-09 05:27:30.087677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:32:47.824 [2024-12-09 05:27:30.087738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:32:47.824 [2024-12-09 
05:27:30.087749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:32:47.824 [2024-12-09 05:27:30.087776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:32:47.824 [2024-12-09 05:27:30.087788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:32:47.824 [2024-12-09 05:27:30.087799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:32:47.824 [2024-12-09 05:27:30.087813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.087824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:32:47.824 [2024-12-09 05:27:30.087842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 01:32:47.824 [2024-12-09 05:27:30.087853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.136280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.136314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:47.824 [2024-12-09 05:27:30.136335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.435 ms 01:32:47.824 [2024-12-09 05:27:30.136346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.136507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.136522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:32:47.824 [2024-12-09 05:27:30.136536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 01:32:47.824 [2024-12-09 05:27:30.136546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.189135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.189173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:47.824 [2024-12-09 05:27:30.189189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.641 ms 01:32:47.824 [2024-12-09 05:27:30.189200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.189282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.189294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:47.824 [2024-12-09 05:27:30.189308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:32:47.824 [2024-12-09 05:27:30.189319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.190115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.190136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:47.824 [2024-12-09 05:27:30.190150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 01:32:47.824 [2024-12-09 05:27:30.190160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.190295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.190309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:47.824 [2024-12-09 05:27:30.190323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 01:32:47.824 [2024-12-09 05:27:30.190333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.216281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.216317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:47.824 [2024-12-09 05:27:30.216335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.960 ms 01:32:47.824 [2024-12-09 05:27:30.216346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:47.824 [2024-12-09 05:27:30.269604] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 01:32:47.824 [2024-12-09 05:27:30.269660] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:32:47.824 [2024-12-09 05:27:30.269687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:47.824 [2024-12-09 05:27:30.269699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:32:47.824 [2024-12-09 05:27:30.269715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.288 ms 01:32:47.824 [2024-12-09 05:27:30.269736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.298379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.298417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:32:48.083 [2024-12-09 05:27:30.298435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.595 ms 01:32:48.083 [2024-12-09 05:27:30.298449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.315324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.315359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:32:48.083 [2024-12-09 05:27:30.315378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.774 ms 01:32:48.083 [2024-12-09 05:27:30.315388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.331879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.331912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:32:48.083 [2024-12-09 05:27:30.331928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.439 ms 01:32:48.083 [2024-12-09 05:27:30.331937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.332710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.332740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:32:48.083 [2024-12-09 05:27:30.332755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 01:32:48.083 [2024-12-09 05:27:30.332765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 
05:27:30.426506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.426563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:32:48.083 [2024-12-09 05:27:30.426583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.860 ms 01:32:48.083 [2024-12-09 05:27:30.426594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.436861] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:32:48.083 [2024-12-09 05:27:30.460639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.460689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:32:48.083 [2024-12-09 05:27:30.460706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.015 ms 01:32:48.083 [2024-12-09 05:27:30.460720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.460843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.460860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:32:48.083 [2024-12-09 05:27:30.460872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:32:48.083 [2024-12-09 05:27:30.460887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.460950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.083 [2024-12-09 05:27:30.460966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:32:48.083 [2024-12-09 05:27:30.460980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 01:32:48.083 [2024-12-09 05:27:30.460994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.083 [2024-12-09 05:27:30.461022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.084 [2024-12-09 05:27:30.461036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:32:48.084 [2024-12-09 05:27:30.461046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:32:48.084 [2024-12-09 05:27:30.461060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.084 [2024-12-09 05:27:30.461105] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:32:48.084 [2024-12-09 05:27:30.461129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.084 [2024-12-09 05:27:30.461140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:32:48.084 [2024-12-09 05:27:30.461153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:32:48.084 [2024-12-09 05:27:30.461166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.084 [2024-12-09 05:27:30.497078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.084 [2024-12-09 05:27:30.497118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:32:48.084 [2024-12-09 05:27:30.497135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.938 ms 01:32:48.084 [2024-12-09 05:27:30.497145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.084 [2024-12-09 05:27:30.497270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.084 [2024-12-09 05:27:30.497288] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:32:48.084 [2024-12-09 05:27:30.497302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:32:48.084 [2024-12-09 05:27:30.497312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.084 [2024-12-09 05:27:30.498717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:32:48.084 [2024-12-09 05:27:30.502831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 458.775 ms, result 0 01:32:48.084 [2024-12-09 05:27:30.504073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:32:48.084 Some configs were skipped because the RPC state that can call them passed over. 01:32:48.350 05:27:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 01:32:48.351 [2024-12-09 05:27:30.746660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.351 [2024-12-09 05:27:30.746710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 01:32:48.351 [2024-12-09 05:27:30.746724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 01:32:48.351 [2024-12-09 05:27:30.746738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.351 [2024-12-09 05:27:30.746792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.790 ms, result 0 01:32:48.351 true 01:32:48.351 05:27:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 01:32:48.624 [2024-12-09 05:27:30.934094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:48.624 [2024-12-09 05:27:30.934131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 01:32:48.624 [2024-12-09 05:27:30.934147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 01:32:48.624 [2024-12-09 05:27:30.934156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:48.624 [2024-12-09 05:27:30.934194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.341 ms, result 0 01:32:48.624 true 01:32:48.624 05:27:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78656 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78656 ']' 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78656 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78656 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:48.624 killing process with pid 78656 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78656' 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78656 01:32:48.624 05:27:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78656 01:32:50.006 [2024-12-09 05:27:32.169985] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.170056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:32:50.006 [2024-12-09 05:27:32.170074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:32:50.006 [2024-12-09 05:27:32.170096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.170121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:32:50.006 [2024-12-09 05:27:32.174333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.174367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:32:50.006 [2024-12-09 05:27:32.174384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 01:32:50.006 [2024-12-09 05:27:32.174394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.174686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.174702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:32:50.006 [2024-12-09 05:27:32.174716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 01:32:50.006 [2024-12-09 05:27:32.174725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.178024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.178064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:32:50.006 [2024-12-09 05:27:32.178079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 01:32:50.006 [2024-12-09 05:27:32.178089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.183358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.183392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:32:50.006 [2024-12-09 05:27:32.183410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.233 ms 01:32:50.006 [2024-12-09 05:27:32.183420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.197360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.197403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:32:50.006 [2024-12-09 05:27:32.197421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.883 ms 01:32:50.006 [2024-12-09 05:27:32.197431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.208702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.208739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:32:50.006 [2024-12-09 05:27:32.208755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.219 ms 01:32:50.006 [2024-12-09 05:27:32.208764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.208911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.208925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:32:50.006 [2024-12-09 05:27:32.208938] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 01:32:50.006 [2024-12-09 05:27:32.208948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.223906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.223939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:32:50.006 [2024-12-09 05:27:32.223955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.958 ms 01:32:50.006 [2024-12-09 05:27:32.223964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.237785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.237816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:32:50.006 [2024-12-09 05:27:32.237837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.791 ms 01:32:50.006 [2024-12-09 05:27:32.237846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.251413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.251445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:32:50.006 [2024-12-09 05:27:32.251466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.535 ms 01:32:50.006 [2024-12-09 05:27:32.251476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.006 [2024-12-09 05:27:32.264860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.006 [2024-12-09 05:27:32.264894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:32:50.006 [2024-12-09 05:27:32.264909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.327 ms 01:32:50.006 [2024-12-09 05:27:32.264918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.007 [2024-12-09 05:27:32.264979] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:32:50.007 [2024-12-09 05:27:32.265001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.265120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 
05:27:32.265130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free [Bands 12-84 elided: every entry reads 0 / 261120 wr_cnt: 0 state: free] 01:32:50.007 [2024-12-09 05:27:32.266050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 01:32:50.007 [2024-12-09 05:27:32.266068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:32:50.008 [2024-12-09 05:27:32.266275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:32:50.008 [2024-12-09 05:27:32.266298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878 01:32:50.008 [2024-12-09 05:27:32.266309] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:32:50.008 [2024-12-09 05:27:32.266322] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:32:50.008 [2024-12-09 05:27:32.266331] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:32:50.008 [2024-12-09 05:27:32.266344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:32:50.008 [2024-12-09 05:27:32.266354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:32:50.008 [2024-12-09 05:27:32.266367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:32:50.008 [2024-12-09 05:27:32.266377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:32:50.008 [2024-12-09 05:27:32.266389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:32:50.008 [2024-12-09 05:27:32.266397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:32:50.008 [2024-12-09 05:27:32.266411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
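The stats dump above makes the write-amplification bookkeeping easy to check by hand: ftl_dev_dump_stats reports 960 total (media) writes against 0 user writes, so the write amplification factor, conventionally the ratio of media writes to user writes, comes out infinite, which is exactly the "WAF: inf" line. As a worked check (this is the standard WAF definition, not quoted from SPDK source):

$$\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} \rightarrow \infty$$

Since user writes are zero at this point, all 960 writes are the FTL's own metadata and housekeeping traffic from the startup/shutdown cycle traced above.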
01:32:50.008 [2024-12-09 05:27:32.266421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:32:50.008 [2024-12-09 05:27:32.266438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 01:32:50.008 [2024-12-09 05:27:32.266448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.008 [2024-12-09 05:27:32.286167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.008 [2024-12-09 05:27:32.286200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:32:50.008 [2024-12-09 05:27:32.286219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.705 ms 01:32:50.008 [2024-12-09 05:27:32.286230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.008 [2024-12-09 05:27:32.286851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:50.008 [2024-12-09 05:27:32.286878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:32:50.008 [2024-12-09 05:27:32.286892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 01:32:50.008 [2024-12-09 05:27:32.286903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.008 [2024-12-09 05:27:32.357843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.008 [2024-12-09 05:27:32.357881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:50.008 [2024-12-09 05:27:32.357897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.008 [2024-12-09 05:27:32.357908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.008 [2024-12-09 05:27:32.358012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.008 [2024-12-09 05:27:32.358030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:50.008 [2024-12-09 05:27:32.358045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.008 [2024-12-09 05:27:32.358055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.008 [2024-12-09 05:27:32.358112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.008 [2024-12-09 05:27:32.358125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:50.008 [2024-12-09 05:27:32.358143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.008 [2024-12-09 05:27:32.358153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.008 [2024-12-09 05:27:32.358176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.008 [2024-12-09 05:27:32.358188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:50.008 [2024-12-09 05:27:32.358206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.008 [2024-12-09 05:27:32.358216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.485281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.485343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:50.268 [2024-12-09 05:27:32.485362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.485374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 
05:27:32.584813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.584874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:50.268 [2024-12-09 05:27:32.584914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.584925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.585063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:50.268 [2024-12-09 05:27:32.585085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.585096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.585146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:50.268 [2024-12-09 05:27:32.585162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.585177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.585314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:50.268 [2024-12-09 05:27:32.585332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.585342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.585430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:32:50.268 [2024-12-09 05:27:32.585446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.585456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.585551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:50.268 [2024-12-09 05:27:32.585573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.585584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:32:50.268 [2024-12-09 05:27:32.585652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:50.268 [2024-12-09 05:27:32.585666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:32:50.268 [2024-12-09 05:27:32.585680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:50.268 [2024-12-09 05:27:32.585850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.510 ms, result 0 01:32:51.648 05:27:33 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 01:32:51.648 05:27:33 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:32:51.648 [2024-12-09 05:27:33.840544] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:32:51.648 [2024-12-09 05:27:33.840683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78725 ] 01:32:51.648 [2024-12-09 05:27:34.027636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:51.908 [2024-12-09 05:27:34.153914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:52.168 [2024-12-09 05:27:34.564868] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:32:52.168 [2024-12-09 05:27:34.564957] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:32:52.430 [2024-12-09 05:27:34.731179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.731229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:32:52.430 [2024-12-09 05:27:34.731247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:32:52.430 [2024-12-09 05:27:34.731258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.734773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.734810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:32:52.430 [2024-12-09 05:27:34.734823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.499 ms 01:32:52.430 [2024-12-09 05:27:34.734833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.734937] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:32:52.430 [2024-12-09 05:27:34.735830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:32:52.430 [2024-12-09 05:27:34.735866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.735878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:32:52.430 [2024-12-09 05:27:34.735890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 01:32:52.430 [2024-12-09 05:27:34.735899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.738227] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:32:52.430 [2024-12-09 05:27:34.757665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.757711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:32:52.430 [2024-12-09 05:27:34.757725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.470 ms 01:32:52.430 [2024-12-09 05:27:34.757736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.757838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.757852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:32:52.430 [2024-12-09 05:27:34.757865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 01:32:52.430 [2024-12-09 05:27:34.757875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.770080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.770108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:32:52.430 [2024-12-09 05:27:34.770121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.182 ms 01:32:52.430 [2024-12-09 05:27:34.770130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.770258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.770272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:32:52.430 [2024-12-09 05:27:34.770284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 01:32:52.430 [2024-12-09 05:27:34.770294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.770327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.770338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:32:52.430 [2024-12-09 05:27:34.770349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:32:52.430 [2024-12-09 05:27:34.770359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.770383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:32:52.430 [2024-12-09 05:27:34.775895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.775927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:32:52.430 [2024-12-09 05:27:34.775939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.526 ms 01:32:52.430 [2024-12-09 05:27:34.775950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.776001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.776015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:32:52.430 [2024-12-09 05:27:34.776026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:32:52.430 [2024-12-09 05:27:34.776037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.776065] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:32:52.430 [2024-12-09 05:27:34.776090] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:32:52.430 [2024-12-09 05:27:34.776128] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:32:52.430 [2024-12-09 05:27:34.776149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:32:52.430 [2024-12-09 05:27:34.776244] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:32:52.430 [2024-12-09 05:27:34.776258] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:32:52.430 [2024-12-09 05:27:34.776272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:32:52.430 [2024-12-09 05:27:34.776290] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:32:52.430 [2024-12-09 05:27:34.776303] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:32:52.430 [2024-12-09 05:27:34.776315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:32:52.430 [2024-12-09 05:27:34.776326] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:32:52.430 [2024-12-09 05:27:34.776338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:32:52.430 [2024-12-09 05:27:34.776349] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:32:52.430 [2024-12-09 05:27:34.776361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.776372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:32:52.430 [2024-12-09 05:27:34.776384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 01:32:52.430 [2024-12-09 05:27:34.776395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.776490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.430 [2024-12-09 05:27:34.776508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:32:52.430 [2024-12-09 05:27:34.776519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 01:32:52.430 [2024-12-09 05:27:34.776530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.430 [2024-12-09 05:27:34.776621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:32:52.430 [2024-12-09 05:27:34.776635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:32:52.430 [2024-12-09 05:27:34.776645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:52.430 [2024-12-09 05:27:34.776655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:52.430 [2024-12-09 05:27:34.776666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:32:52.430 [2024-12-09 05:27:34.776675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:32:52.430 [2024-12-09 05:27:34.776684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:32:52.430 [2024-12-09 05:27:34.776695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:32:52.430 [2024-12-09 05:27:34.776705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:32:52.430 [2024-12-09 05:27:34.776714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:52.430 [2024-12-09 05:27:34.776723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:32:52.431 [2024-12-09 05:27:34.776746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:32:52.431 [2024-12-09 05:27:34.776756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:32:52.431 [2024-12-09 05:27:34.776766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:32:52.431 [2024-12-09 05:27:34.776775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:32:52.431 [2024-12-09 05:27:34.776785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776794] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:32:52.431 [2024-12-09 05:27:34.776803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:32:52.431 [2024-12-09 05:27:34.776812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:32:52.431 [2024-12-09 05:27:34.776832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:52.431 [2024-12-09 05:27:34.776850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:32:52.431 [2024-12-09 05:27:34.776860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:52.431 [2024-12-09 05:27:34.776878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:32:52.431 [2024-12-09 05:27:34.776887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:52.431 [2024-12-09 05:27:34.776903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:32:52.431 [2024-12-09 05:27:34.776912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:32:52.431 [2024-12-09 05:27:34.776929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:32:52.431 [2024-12-09 05:27:34.776938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:32:52.431 [2024-12-09 05:27:34.776947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:52.431 [2024-12-09 05:27:34.776956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:32:52.431 [2024-12-09 05:27:34.776965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:32:52.431 [2024-12-09 05:27:34.776973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:32:52.431 [2024-12-09 05:27:34.776982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:32:52.431 [2024-12-09 05:27:34.776992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:32:52.431 [2024-12-09 05:27:34.777000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:52.431 [2024-12-09 05:27:34.777009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:32:52.431 [2024-12-09 05:27:34.777018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:32:52.431 [2024-12-09 05:27:34.777026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:52.431 [2024-12-09 05:27:34.777034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:32:52.431 [2024-12-09 05:27:34.777045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:32:52.431 [2024-12-09 05:27:34.777059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:32:52.431 [2024-12-09 05:27:34.777069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:32:52.431 [2024-12-09 05:27:34.777079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:32:52.431 
[2024-12-09 05:27:34.777089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:32:52.431 [2024-12-09 05:27:34.777098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:32:52.431 [2024-12-09 05:27:34.777106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:32:52.431 [2024-12-09 05:27:34.777115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:32:52.431 [2024-12-09 05:27:34.777124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:32:52.431 [2024-12-09 05:27:34.777135] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:32:52.431 [2024-12-09 05:27:34.777147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:32:52.431 [2024-12-09 05:27:34.777168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:32:52.431 [2024-12-09 05:27:34.777178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:32:52.431 [2024-12-09 05:27:34.777187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:32:52.431 [2024-12-09 05:27:34.777198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:32:52.431 [2024-12-09 05:27:34.777208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:32:52.431 [2024-12-09 05:27:34.777218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:32:52.431 [2024-12-09 05:27:34.777228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:32:52.431 [2024-12-09 05:27:34.777240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:32:52.431 [2024-12-09 05:27:34.777250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:32:52.431 [2024-12-09 05:27:34.777302] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:32:52.431 [2024-12-09 05:27:34.777313] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:32:52.431 [2024-12-09 05:27:34.777334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:32:52.431 [2024-12-09 05:27:34.777343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:32:52.431 [2024-12-09 05:27:34.777353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:32:52.431 [2024-12-09 05:27:34.777363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.431 [2024-12-09 05:27:34.777380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:32:52.431 [2024-12-09 05:27:34.777390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 01:32:52.431 [2024-12-09 05:27:34.777400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.431 [2024-12-09 05:27:34.825745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.431 [2024-12-09 05:27:34.825784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:32:52.431 [2024-12-09 05:27:34.825798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.366 ms 01:32:52.432 [2024-12-09 05:27:34.825809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.432 [2024-12-09 05:27:34.825966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.432 [2024-12-09 05:27:34.825979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:32:52.432 [2024-12-09 05:27:34.825991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 01:32:52.432 [2024-12-09 05:27:34.826001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.692 [2024-12-09 05:27:34.896810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.692 [2024-12-09 05:27:34.896854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:32:52.692 [2024-12-09 05:27:34.896868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.898 ms 01:32:52.692 [2024-12-09 05:27:34.896879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.692 [2024-12-09 05:27:34.896965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.692 [2024-12-09 05:27:34.896978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:32:52.692 [2024-12-09 05:27:34.896990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:32:52.692 [2024-12-09 05:27:34.897001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.692 [2024-12-09 05:27:34.897776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.692 [2024-12-09 05:27:34.897798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:32:52.692 [2024-12-09 05:27:34.897817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 01:32:52.692 [2024-12-09 05:27:34.897827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.692 [2024-12-09 
05:27:34.897958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.692 [2024-12-09 05:27:34.897973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:32:52.692 [2024-12-09 05:27:34.897984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 01:32:52.692 [2024-12-09 05:27:34.897994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.692 [2024-12-09 05:27:34.919753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.692 [2024-12-09 05:27:34.919787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:32:52.692 [2024-12-09 05:27:34.919801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.769 ms 01:32:52.693 [2024-12-09 05:27:34.919812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:34.938321] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 01:32:52.693 [2024-12-09 05:27:34.938360] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:32:52.693 [2024-12-09 05:27:34.938377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:34.938388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:32:52.693 [2024-12-09 05:27:34.938400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.476 ms 01:32:52.693 [2024-12-09 05:27:34.938411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:34.967516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:34.967557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:32:52.693 [2024-12-09 05:27:34.967571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.036 ms 01:32:52.693 [2024-12-09 05:27:34.967582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:34.985114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:34.985151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:32:52.693 [2024-12-09 05:27:34.985165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.479 ms 01:32:52.693 [2024-12-09 05:27:34.985175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.002818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.002854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:32:52.693 [2024-12-09 05:27:35.002868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.593 ms 01:32:52.693 [2024-12-09 05:27:35.002878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.003668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.003701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:32:52.693 [2024-12-09 05:27:35.003715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 01:32:52.693 [2024-12-09 05:27:35.003726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.099236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.099294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:32:52.693 [2024-12-09 05:27:35.099311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.632 ms 01:32:52.693 [2024-12-09 05:27:35.099323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.109414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:32:52.693 [2024-12-09 05:27:35.133210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.133253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:32:52.693 [2024-12-09 05:27:35.133269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.823 ms 01:32:52.693 [2024-12-09 05:27:35.133287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.133394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.133409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:32:52.693 [2024-12-09 05:27:35.133422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:32:52.693 [2024-12-09 05:27:35.133433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.133520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.133534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:32:52.693 [2024-12-09 05:27:35.133545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 01:32:52.693 [2024-12-09 05:27:35.133561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.133605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.133620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:32:52.693 [2024-12-09 05:27:35.133631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 01:32:52.693 [2024-12-09 05:27:35.133642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.693 [2024-12-09 05:27:35.133684] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:32:52.693 [2024-12-09 05:27:35.133698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.693 [2024-12-09 05:27:35.133709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:32:52.693 [2024-12-09 05:27:35.133720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:32:52.693 [2024-12-09 05:27:35.133730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.952 [2024-12-09 05:27:35.169894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.952 [2024-12-09 05:27:35.169933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:32:52.952 [2024-12-09 05:27:35.169948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.194 ms 01:32:52.952 [2024-12-09 05:27:35.169959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.952 [2024-12-09 05:27:35.170082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:32:52.952 [2024-12-09 05:27:35.170096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 01:32:52.952 [2024-12-09 05:27:35.170108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:32:52.952 [2024-12-09 05:27:35.170118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:32:52.952 [2024-12-09 05:27:35.171422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:32:52.952 [2024-12-09 05:27:35.175474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.613 ms, result 0 01:32:52.952 [2024-12-09 05:27:35.176371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:32:52.952 [2024-12-09 05:27:35.194215] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:32:53.888  [2024-12-09T05:27:37.281Z] Copying: 27/256 [MB] (27 MBps) [2024-12-09T05:27:38.219Z] Copying: 52/256 [MB] (24 MBps) [2024-12-09T05:27:39.605Z] Copying: 76/256 [MB] (24 MBps) [2024-12-09T05:27:40.539Z] Copying: 101/256 [MB] (24 MBps) [2024-12-09T05:27:41.476Z] Copying: 127/256 [MB] (25 MBps) [2024-12-09T05:27:42.414Z] Copying: 152/256 [MB] (25 MBps) [2024-12-09T05:27:43.352Z] Copying: 178/256 [MB] (26 MBps) [2024-12-09T05:27:44.290Z] Copying: 204/256 [MB] (25 MBps) [2024-12-09T05:27:45.228Z] Copying: 230/256 [MB] (25 MBps) [2024-12-09T05:27:45.228Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-09 05:27:45.163220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:33:02.772 [2024-12-09 05:27:45.177596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:02.772 [2024-12-09 05:27:45.177638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:33:02.772 [2024-12-09 05:27:45.177662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:33:02.772 [2024-12-09 05:27:45.177672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:02.772 [2024-12-09 05:27:45.177696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:33:02.772 [2024-12-09 05:27:45.182287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:02.772 [2024-12-09 05:27:45.182317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:33:02.772 [2024-12-09 05:27:45.182328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 01:33:02.772 [2024-12-09 05:27:45.182338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:02.772 [2024-12-09 05:27:45.182571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:02.772 [2024-12-09 05:27:45.182586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:33:02.772 [2024-12-09 05:27:45.182597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 01:33:02.772 [2024-12-09 05:27:45.182607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:02.772 [2024-12-09 05:27:45.185279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:02.772 [2024-12-09 05:27:45.185302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:33:02.772 [2024-12-09 05:27:45.185312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 01:33:02.772 [2024-12-09 05:27:45.185323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
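The spdk_dd transfer logged in this stretch is also easy to sanity-check. The invocation passed --count=65536, and at a 4 KiB FTL block size (an inferred size, consistent with the totals but not printed explicitly here) that is 65536 x 4096 B = 256 MiB, matching the "256/256 [MB]" endpoint of the progress updates. Roughly ten seconds elapse between the copy starting (the app_thread IO channel created at 05:27:35.194) and the final update at 05:27:45.228, so the reported average follows directly:

$$\text{throughput} \approx \frac{256\ \text{MB}}{\approx 10\ \text{s}} \approx 25.6\ \text{MB/s}$$

which agrees with the final "Copying: 256/256 [MB] (average 25 MBps)" line before the FTL shutdown sequence resumes.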
01:33:02.772 [2024-12-09 05:27:45.190504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:02.772 [2024-12-09 05:27:45.190538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:33:02.772 [2024-12-09 05:27:45.190563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.156 ms 01:33:02.772 [2024-12-09 05:27:45.190572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:02.772 [2024-12-09 05:27:45.224469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:02.772 [2024-12-09 05:27:45.224507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:33:02.772 [2024-12-09 05:27:45.224520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.884 ms 01:33:02.772 [2024-12-09 05:27:45.224530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.032 [2024-12-09 05:27:45.245931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.032 [2024-12-09 05:27:45.245976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:33:03.032 [2024-12-09 05:27:45.245997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.379 ms 01:33:03.032 [2024-12-09 05:27:45.246008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.032 [2024-12-09 05:27:45.246144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.032 [2024-12-09 05:27:45.246158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:33:03.032 [2024-12-09 05:27:45.246181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 01:33:03.032 [2024-12-09 05:27:45.246192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.032 [2024-12-09 05:27:45.281495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.032 [2024-12-09 05:27:45.281531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:33:03.032 [2024-12-09 05:27:45.281545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.343 ms 01:33:03.032 [2024-12-09 05:27:45.281555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.032 [2024-12-09 05:27:45.316901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.032 [2024-12-09 05:27:45.316940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:33:03.032 [2024-12-09 05:27:45.316953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.346 ms 01:33:03.032 [2024-12-09 05:27:45.316963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.032 [2024-12-09 05:27:45.351739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.032 [2024-12-09 05:27:45.351785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:33:03.032 [2024-12-09 05:27:45.351798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.774 ms 01:33:03.032 [2024-12-09 05:27:45.351809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.032 [2024-12-09 05:27:45.385647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.032 [2024-12-09 05:27:45.385682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:33:03.032 [2024-12-09 05:27:45.385695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.808 ms 01:33:03.032 [2024-12-09 
05:27:45.385703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.033 [2024-12-09 05:27:45.385768] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:33:03.033 [2024-12-09 05:27:45.385785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.385997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386027] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 
05:27:45.386274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
01:33:03.033 [2024-12-09 05:27:45.386542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:33:03.033 [2024-12-09 05:27:45.386678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:33:03.034 [2024-12-09 05:27:45.386836] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:33:03.034 [2024-12-09 05:27:45.386846] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878 01:33:03.034 [2024-12-09 05:27:45.386857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:33:03.034 [2024-12-09 05:27:45.386867] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:33:03.034 [2024-12-09 05:27:45.386877] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:33:03.034 [2024-12-09 05:27:45.386887] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:33:03.034 [2024-12-09 05:27:45.386897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:33:03.034 [2024-12-09 05:27:45.386908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:33:03.034 [2024-12-09 05:27:45.386922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:33:03.034 [2024-12-09 05:27:45.386931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:33:03.034 [2024-12-09 05:27:45.386939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:33:03.034 [2024-12-09 05:27:45.386948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.034 [2024-12-09 05:27:45.386958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:33:03.034 [2024-12-09 05:27:45.386969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 01:33:03.034 [2024-12-09 05:27:45.386979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.034 [2024-12-09 05:27:45.406897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.034 [2024-12-09 05:27:45.406928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:33:03.034 [2024-12-09 05:27:45.406941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.920 ms 01:33:03.034 [2024-12-09 05:27:45.406952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.034 [2024-12-09 05:27:45.407629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:03.034 [2024-12-09 05:27:45.407652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:33:03.034 [2024-12-09 05:27:45.407664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 01:33:03.034 [2024-12-09 05:27:45.407674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.034 [2024-12-09 05:27:45.462749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.034 [2024-12-09 05:27:45.462782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:03.034 [2024-12-09 05:27:45.462795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.034 [2024-12-09 05:27:45.462811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.034 [2024-12-09 05:27:45.462898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.034 [2024-12-09 05:27:45.462911] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:03.034 [2024-12-09 05:27:45.462922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.034 [2024-12-09 05:27:45.462932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.034 [2024-12-09 05:27:45.462993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.034 [2024-12-09 05:27:45.463008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:03.034 [2024-12-09 05:27:45.463019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.034 [2024-12-09 05:27:45.463029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.034 [2024-12-09 05:27:45.463054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.034 [2024-12-09 05:27:45.463066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:03.034 [2024-12-09 05:27:45.463076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.034 [2024-12-09 05:27:45.463086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.589430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.589496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:03.292 [2024-12-09 05:27:45.589511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.292 [2024-12-09 05:27:45.589522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.688142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.688199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:03.292 [2024-12-09 05:27:45.688215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.292 [2024-12-09 05:27:45.688227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.688310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.688323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:03.292 [2024-12-09 05:27:45.688334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.292 [2024-12-09 05:27:45.688345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.688378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.688396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:03.292 [2024-12-09 05:27:45.688408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.292 [2024-12-09 05:27:45.688418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.688558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.688573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:03.292 [2024-12-09 05:27:45.688584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.292 [2024-12-09 05:27:45.688595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.688636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.688648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:33:03.292 [2024-12-09 05:27:45.688665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.292 [2024-12-09 05:27:45.688675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.292 [2024-12-09 05:27:45.688725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.292 [2024-12-09 05:27:45.688737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:03.293 [2024-12-09 05:27:45.688748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.293 [2024-12-09 05:27:45.688759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.293 [2024-12-09 05:27:45.688810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:03.293 [2024-12-09 05:27:45.688827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:03.293 [2024-12-09 05:27:45.688838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:03.293 [2024-12-09 05:27:45.688848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:03.293 [2024-12-09 05:27:45.689021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 512.239 ms, result 0 01:33:04.666 01:33:04.666 01:33:04.666 05:27:46 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 01:33:04.666 05:27:46 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 01:33:04.924 05:27:47 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:33:05.183 [2024-12-09 05:27:47.430204] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:33:05.183 [2024-12-09 05:27:47.430357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78863 ] 01:33:05.183 [2024-12-09 05:27:47.633938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:05.466 [2024-12-09 05:27:47.769180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:05.747 [2024-12-09 05:27:48.174545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:05.747 [2024-12-09 05:27:48.174639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:06.039 [2024-12-09 05:27:48.340092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.340148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:33:06.039 [2024-12-09 05:27:48.340166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:33:06.039 [2024-12-09 05:27:48.340176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.343559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.343598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:06.039 [2024-12-09 05:27:48.343610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.369 ms 01:33:06.039 [2024-12-09 05:27:48.343620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.343723] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:33:06.039 [2024-12-09 05:27:48.344647] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:33:06.039 [2024-12-09 05:27:48.344681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.344692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:06.039 [2024-12-09 05:27:48.344703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 01:33:06.039 [2024-12-09 05:27:48.344714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.347158] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:33:06.039 [2024-12-09 05:27:48.366794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.366843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:33:06.039 [2024-12-09 05:27:48.366859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.669 ms 01:33:06.039 [2024-12-09 05:27:48.366870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.366974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.366997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:33:06.039 [2024-12-09 05:27:48.367009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 01:33:06.039 [2024-12-09 05:27:48.367019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.379203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:33:06.039 [2024-12-09 05:27:48.379230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:06.039 [2024-12-09 05:27:48.379242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 01:33:06.039 [2024-12-09 05:27:48.379253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.379377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.379392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:06.039 [2024-12-09 05:27:48.379403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 01:33:06.039 [2024-12-09 05:27:48.379415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.379447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.379477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:33:06.039 [2024-12-09 05:27:48.379490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:33:06.039 [2024-12-09 05:27:48.379500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.379523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:33:06.039 [2024-12-09 05:27:48.385021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.385054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:06.039 [2024-12-09 05:27:48.385065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.514 ms 01:33:06.039 [2024-12-09 05:27:48.385075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.385123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.385136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:33:06.039 [2024-12-09 05:27:48.385148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:33:06.039 [2024-12-09 05:27:48.385157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.385184] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:33:06.039 [2024-12-09 05:27:48.385209] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:33:06.039 [2024-12-09 05:27:48.385245] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:33:06.039 [2024-12-09 05:27:48.385263] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:33:06.039 [2024-12-09 05:27:48.385352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:33:06.039 [2024-12-09 05:27:48.385366] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:33:06.039 [2024-12-09 05:27:48.385379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:33:06.039 [2024-12-09 05:27:48.385396] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385408] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:33:06.039 [2024-12-09 05:27:48.385432] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:33:06.039 [2024-12-09 05:27:48.385442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:33:06.039 [2024-12-09 05:27:48.385452] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:33:06.039 [2024-12-09 05:27:48.385474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.385485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:33:06.039 [2024-12-09 05:27:48.385496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 01:33:06.039 [2024-12-09 05:27:48.385506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.385580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.039 [2024-12-09 05:27:48.385595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:33:06.039 [2024-12-09 05:27:48.385606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:33:06.039 [2024-12-09 05:27:48.385616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.039 [2024-12-09 05:27:48.385706] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:33:06.039 [2024-12-09 05:27:48.385719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:33:06.039 [2024-12-09 05:27:48.385729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:33:06.039 [2024-12-09 05:27:48.385761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:33:06.039 [2024-12-09 05:27:48.385791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:06.039 [2024-12-09 05:27:48.385812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:33:06.039 [2024-12-09 05:27:48.385834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:33:06.039 [2024-12-09 05:27:48.385843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:06.039 [2024-12-09 05:27:48.385852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:33:06.039 [2024-12-09 05:27:48.385861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:33:06.039 [2024-12-09 05:27:48.385870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:33:06.039 [2024-12-09 05:27:48.385889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385898] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:33:06.039 [2024-12-09 05:27:48.385916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:33:06.039 [2024-12-09 05:27:48.385943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:33:06.039 [2024-12-09 05:27:48.385970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:33:06.039 [2024-12-09 05:27:48.385978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:06.039 [2024-12-09 05:27:48.385987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:33:06.039 [2024-12-09 05:27:48.385997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:33:06.039 [2024-12-09 05:27:48.386006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:06.039 [2024-12-09 05:27:48.386015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:33:06.039 [2024-12-09 05:27:48.386024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:33:06.039 [2024-12-09 05:27:48.386033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:06.039 [2024-12-09 05:27:48.386041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:33:06.039 [2024-12-09 05:27:48.386049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:33:06.039 [2024-12-09 05:27:48.386058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:06.039 [2024-12-09 05:27:48.386068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:33:06.039 [2024-12-09 05:27:48.386077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:33:06.039 [2024-12-09 05:27:48.386085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.386093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:33:06.039 [2024-12-09 05:27:48.386103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:33:06.039 [2024-12-09 05:27:48.386112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.386120] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:33:06.039 [2024-12-09 05:27:48.386130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:33:06.039 [2024-12-09 05:27:48.386144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:06.039 [2024-12-09 05:27:48.386153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:06.039 [2024-12-09 05:27:48.386163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:33:06.039 [2024-12-09 05:27:48.386173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:33:06.039 [2024-12-09 05:27:48.386182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:33:06.039 
[2024-12-09 05:27:48.386191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:33:06.039 [2024-12-09 05:27:48.386200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:33:06.039 [2024-12-09 05:27:48.386210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:33:06.039 [2024-12-09 05:27:48.386220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:33:06.039 [2024-12-09 05:27:48.386233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:33:06.040 [2024-12-09 05:27:48.386253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:33:06.040 [2024-12-09 05:27:48.386262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:33:06.040 [2024-12-09 05:27:48.386271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:33:06.040 [2024-12-09 05:27:48.386281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:33:06.040 [2024-12-09 05:27:48.386291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:33:06.040 [2024-12-09 05:27:48.386301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:33:06.040 [2024-12-09 05:27:48.386310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:33:06.040 [2024-12-09 05:27:48.386320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:33:06.040 [2024-12-09 05:27:48.386330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:33:06.040 [2024-12-09 05:27:48.386381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:33:06.040 [2024-12-09 05:27:48.386393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:33:06.040 [2024-12-09 05:27:48.386414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:33:06.040 [2024-12-09 05:27:48.386424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:33:06.040 [2024-12-09 05:27:48.386435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:33:06.040 [2024-12-09 05:27:48.386444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.040 [2024-12-09 05:27:48.386469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:33:06.040 [2024-12-09 05:27:48.386480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 01:33:06.040 [2024-12-09 05:27:48.386490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.040 [2024-12-09 05:27:48.435792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.040 [2024-12-09 05:27:48.435830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:06.040 [2024-12-09 05:27:48.435844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.322 ms 01:33:06.040 [2024-12-09 05:27:48.435856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.040 [2024-12-09 05:27:48.436005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.040 [2024-12-09 05:27:48.436019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:33:06.040 [2024-12-09 05:27:48.436031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 01:33:06.040 [2024-12-09 05:27:48.436042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.506973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.507025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:06.298 [2024-12-09 05:27:48.507038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.021 ms 01:33:06.298 [2024-12-09 05:27:48.507049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.507124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.507137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:06.298 [2024-12-09 05:27:48.507149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:33:06.298 [2024-12-09 05:27:48.507160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.507973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.507994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:06.298 [2024-12-09 05:27:48.508013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 01:33:06.298 [2024-12-09 05:27:48.508025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.508171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.508186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:06.298 [2024-12-09 05:27:48.508197] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 01:33:06.298 [2024-12-09 05:27:48.508207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.531206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.531239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:06.298 [2024-12-09 05:27:48.531253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.011 ms 01:33:06.298 [2024-12-09 05:27:48.531264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.550247] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 01:33:06.298 [2024-12-09 05:27:48.550284] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:33:06.298 [2024-12-09 05:27:48.550300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.550311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:33:06.298 [2024-12-09 05:27:48.550323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.959 ms 01:33:06.298 [2024-12-09 05:27:48.550333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.579838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.579889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:33:06.298 [2024-12-09 05:27:48.579904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.467 ms 01:33:06.298 [2024-12-09 05:27:48.579915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.597186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.597222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:33:06.298 [2024-12-09 05:27:48.597235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.220 ms 01:33:06.298 [2024-12-09 05:27:48.597244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.614485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.614518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:33:06.298 [2024-12-09 05:27:48.614530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 01:33:06.298 [2024-12-09 05:27:48.614540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.615229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.615259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:33:06.298 [2024-12-09 05:27:48.615271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 01:33:06.298 [2024-12-09 05:27:48.615281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.705797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.705854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:33:06.298 [2024-12-09 05:27:48.705872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.635 ms 01:33:06.298 [2024-12-09 05:27:48.705883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.716249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:33:06.298 [2024-12-09 05:27:48.740098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.740141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:33:06.298 [2024-12-09 05:27:48.740157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.182 ms 01:33:06.298 [2024-12-09 05:27:48.740186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.740298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.740313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:33:06.298 [2024-12-09 05:27:48.740324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:33:06.298 [2024-12-09 05:27:48.740335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.740401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.740412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:33:06.298 [2024-12-09 05:27:48.740424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:33:06.298 [2024-12-09 05:27:48.740439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.740498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.740513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:33:06.298 [2024-12-09 05:27:48.740524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:33:06.298 [2024-12-09 05:27:48.740533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.298 [2024-12-09 05:27:48.740578] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:33:06.298 [2024-12-09 05:27:48.740591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.298 [2024-12-09 05:27:48.740602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:33:06.298 [2024-12-09 05:27:48.740615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:33:06.298 [2024-12-09 05:27:48.740625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.556 [2024-12-09 05:27:48.775252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.556 [2024-12-09 05:27:48.775292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:33:06.556 [2024-12-09 05:27:48.775306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.660 ms 01:33:06.556 [2024-12-09 05:27:48.775316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.556 [2024-12-09 05:27:48.775442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.556 [2024-12-09 05:27:48.775456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:33:06.556 [2024-12-09 05:27:48.775485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:33:06.556 [2024-12-09 05:27:48.775496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
01:33:06.556 [2024-12-09 05:27:48.776860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:33:06.556 [2024-12-09 05:27:48.780757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 437.108 ms, result 0 01:33:06.556 [2024-12-09 05:27:48.781759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:33:06.556 [2024-12-09 05:27:48.799627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:33:06.556  [2024-12-09T05:27:49.012Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-12-09 05:27:48.970358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:33:06.556 [2024-12-09 05:27:48.984235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.556 [2024-12-09 05:27:48.984282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:33:06.556 [2024-12-09 05:27:48.984302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:33:06.556 [2024-12-09 05:27:48.984312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.556 [2024-12-09 05:27:48.984334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:33:06.556 [2024-12-09 05:27:48.988917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.556 [2024-12-09 05:27:48.988957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:33:06.556 [2024-12-09 05:27:48.988969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 01:33:06.556 [2024-12-09 05:27:48.988979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.557 [2024-12-09 05:27:48.991051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.557 [2024-12-09 05:27:48.991087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:33:06.557 [2024-12-09 05:27:48.991100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 01:33:06.557 [2024-12-09 05:27:48.991110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.557 [2024-12-09 05:27:48.994243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.557 [2024-12-09 05:27:48.994273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:33:06.557 [2024-12-09 05:27:48.994284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 01:33:06.557 [2024-12-09 05:27:48.994295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.557 [2024-12-09 05:27:48.999621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.557 [2024-12-09 05:27:48.999651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:33:06.557 [2024-12-09 05:27:48.999662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 01:33:06.557 [2024-12-09 05:27:48.999672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.034176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.034212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:33:06.816 [2024-12-09 05:27:49.034225] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 34.508 ms 01:33:06.816 [2024-12-09 05:27:49.034235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.054745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.054787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:33:06.816 [2024-12-09 05:27:49.054800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.491 ms 01:33:06.816 [2024-12-09 05:27:49.054811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.054970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.054984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:33:06.816 [2024-12-09 05:27:49.055015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 01:33:06.816 [2024-12-09 05:27:49.055024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.089218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.089253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:33:06.816 [2024-12-09 05:27:49.089265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.232 ms 01:33:06.816 [2024-12-09 05:27:49.089274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.122433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.122474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:33:06.816 [2024-12-09 05:27:49.122487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.161 ms 01:33:06.816 [2024-12-09 05:27:49.122496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.156106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.156140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:33:06.816 [2024-12-09 05:27:49.156152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.612 ms 01:33:06.816 [2024-12-09 05:27:49.156161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.189615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.816 [2024-12-09 05:27:49.189649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:33:06.816 [2024-12-09 05:27:49.189661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.419 ms 01:33:06.816 [2024-12-09 05:27:49.189670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.816 [2024-12-09 05:27:49.189722] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:33:06.817 [2024-12-09 05:27:49.189740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
01:33:06.817 [2024-12-09 05:27:49.189785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.189993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190532] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:33:06.817 [2024-12-09 05:27:49.190639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:33:06.818 [2024-12-09 05:27:49.190786] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:33:06.818 [2024-12-09 05:27:49.190797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878 01:33:06.818 [2024-12-09 05:27:49.190808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:33:06.818 [2024-12-09 05:27:49.190818] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 01:33:06.818 [2024-12-09 05:27:49.190828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:33:06.818 [2024-12-09 05:27:49.190838] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:33:06.818 [2024-12-09 05:27:49.190847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:33:06.818 [2024-12-09 05:27:49.190857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:33:06.818 [2024-12-09 05:27:49.190874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:33:06.818 [2024-12-09 05:27:49.190883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:33:06.818 [2024-12-09 05:27:49.190892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:33:06.818 [2024-12-09 05:27:49.190901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.818 [2024-12-09 05:27:49.190911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:33:06.818 [2024-12-09 05:27:49.190922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 01:33:06.818 [2024-12-09 05:27:49.190931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.818 [2024-12-09 05:27:49.210784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.818 [2024-12-09 05:27:49.210815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:33:06.818 [2024-12-09 05:27:49.210827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.854 ms 01:33:06.818 [2024-12-09 05:27:49.210836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.818 [2024-12-09 05:27:49.211444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:06.818 [2024-12-09 05:27:49.211483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:33:06.818 [2024-12-09 05:27:49.211495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 01:33:06.818 [2024-12-09 05:27:49.211505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.818 [2024-12-09 05:27:49.267143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:06.818 [2024-12-09 05:27:49.267179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:06.818 [2024-12-09 05:27:49.267193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:06.818 [2024-12-09 05:27:49.267210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.818 [2024-12-09 05:27:49.267299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:06.818 [2024-12-09 05:27:49.267310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:06.818 [2024-12-09 05:27:49.267321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:06.818 [2024-12-09 05:27:49.267334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.818 [2024-12-09 05:27:49.267384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:06.818 [2024-12-09 05:27:49.267396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:06.818 [2024-12-09 05:27:49.267407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:06.818 [2024-12-09 05:27:49.267417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:06.818 [2024-12-09 05:27:49.267441] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:06.818 [2024-12-09 05:27:49.267452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:06.818 [2024-12-09 05:27:49.267475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:06.818 [2024-12-09 05:27:49.267486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.076 [2024-12-09 05:27:49.391991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.076 [2024-12-09 05:27:49.392052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:07.076 [2024-12-09 05:27:49.392067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.076 [2024-12-09 05:27:49.392085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.076 [2024-12-09 05:27:49.493522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.076 [2024-12-09 05:27:49.493572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:07.076 [2024-12-09 05:27:49.493586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.076 [2024-12-09 05:27:49.493598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.076 [2024-12-09 05:27:49.493697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.076 [2024-12-09 05:27:49.493713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:07.076 [2024-12-09 05:27:49.493724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.076 [2024-12-09 05:27:49.493735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.077 [2024-12-09 05:27:49.493766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.077 [2024-12-09 05:27:49.493785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:07.077 [2024-12-09 05:27:49.493795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.077 [2024-12-09 05:27:49.493806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.077 [2024-12-09 05:27:49.493933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.077 [2024-12-09 05:27:49.493948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:07.077 [2024-12-09 05:27:49.493958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.077 [2024-12-09 05:27:49.493969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.077 [2024-12-09 05:27:49.494009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.077 [2024-12-09 05:27:49.494023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:33:07.077 [2024-12-09 05:27:49.494040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.077 [2024-12-09 05:27:49.494051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:07.077 [2024-12-09 05:27:49.494099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:07.077 [2024-12-09 05:27:49.494111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:07.077 [2024-12-09 05:27:49.494121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:07.077 [2024-12-09 05:27:49.494132] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
01:33:07.077 [2024-12-09 05:27:49.494183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:33:07.077 [2024-12-09 05:27:49.494200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
01:33:07.077 [2024-12-09 05:27:49.494210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:33:07.077 [2024-12-09 05:27:49.494220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:07.077 [2024-12-09 05:27:49.494388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 510.958 ms, result 0
01:33:08.449 
01:33:08.449 
01:33:08.449 05:27:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
01:33:08.449 05:27:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78905
01:33:08.449 05:27:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78905
01:33:08.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:33:08.449 05:27:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78905 ']'
01:33:08.449 05:27:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:33:08.449 05:27:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
01:33:08.449 05:27:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:33:08.449 05:27:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
01:33:08.449 05:27:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
01:33:08.449 [2024-12-09 05:27:50.825815] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
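The xtrace above is the harness bringing the RPC target back up for the ftl_trim tests: trim.sh@92 launches spdk_tgt with FTL init logging, trim.sh@93 records its pid in svcpid, and waitforlisten (from common/autotest_common.sh) blocks until the target is accepting RPCs on /var/tmp/spdk.sock before any bdev commands are issued. A minimal sketch of that wait loop, using only spdk_tgt and scripts/rpc.py from the checked-out repo (the loop body and 0.5 s poll interval are illustrative, not the verbatim helper):

    # Launch the target and block until its RPC socket answers (sketch).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods only succeeds once the app's RPC server is listening
        if "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done

Once the socket answers, the run proceeds to load the bdev configuration over that same socket, which is where the FTL startup trace below begins.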
01:33:08.449 [2024-12-09 05:27:50.826237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78905 ] 01:33:08.707 [2024-12-09 05:27:51.014785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:08.707 [2024-12-09 05:27:51.144357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:10.081 05:27:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:10.081 05:27:52 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 01:33:10.081 05:27:52 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 01:33:10.081 [2024-12-09 05:27:52.326800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:10.081 [2024-12-09 05:27:52.326875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:10.081 [2024-12-09 05:27:52.510160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.081 [2024-12-09 05:27:52.510212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:33:10.081 [2024-12-09 05:27:52.510236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:33:10.081 [2024-12-09 05:27:52.510247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.081 [2024-12-09 05:27:52.513793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.081 [2024-12-09 05:27:52.513829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:10.081 [2024-12-09 05:27:52.513844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 01:33:10.081 [2024-12-09 05:27:52.513854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.081 [2024-12-09 05:27:52.513964] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:33:10.081 [2024-12-09 05:27:52.514985] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:33:10.081 [2024-12-09 05:27:52.515025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.081 [2024-12-09 05:27:52.515036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:10.081 [2024-12-09 05:27:52.515050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 01:33:10.081 [2024-12-09 05:27:52.515062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.081 [2024-12-09 05:27:52.517895] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:33:10.341 [2024-12-09 05:27:52.538894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.538938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:33:10.341 [2024-12-09 05:27:52.538953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.038 ms 01:33:10.341 [2024-12-09 05:27:52.538968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.539074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.539092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:33:10.341 [2024-12-09 05:27:52.539104] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 01:33:10.341 [2024-12-09 05:27:52.539118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.551607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.551646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:10.341 [2024-12-09 05:27:52.551660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 01:33:10.341 [2024-12-09 05:27:52.551675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.551816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.551834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:10.341 [2024-12-09 05:27:52.551854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 01:33:10.341 [2024-12-09 05:27:52.551874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.551906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.551921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:33:10.341 [2024-12-09 05:27:52.551932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:33:10.341 [2024-12-09 05:27:52.551945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.551974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:33:10.341 [2024-12-09 05:27:52.557596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.557625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:10.341 [2024-12-09 05:27:52.557640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.636 ms 01:33:10.341 [2024-12-09 05:27:52.557651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.557713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.557725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:33:10.341 [2024-12-09 05:27:52.557763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:33:10.341 [2024-12-09 05:27:52.557775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.557802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:33:10.341 [2024-12-09 05:27:52.557827] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:33:10.341 [2024-12-09 05:27:52.557876] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:33:10.341 [2024-12-09 05:27:52.557898] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:33:10.341 [2024-12-09 05:27:52.557995] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:33:10.341 [2024-12-09 05:27:52.558008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:33:10.341 [2024-12-09 05:27:52.558032] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:33:10.341 [2024-12-09 05:27:52.558047] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558064] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:33:10.341 [2024-12-09 05:27:52.558089] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:33:10.341 [2024-12-09 05:27:52.558100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:33:10.341 [2024-12-09 05:27:52.558118] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:33:10.341 [2024-12-09 05:27:52.558128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.558142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:33:10.341 [2024-12-09 05:27:52.558153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 01:33:10.341 [2024-12-09 05:27:52.558170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.558244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.341 [2024-12-09 05:27:52.558281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:33:10.341 [2024-12-09 05:27:52.558292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:33:10.341 [2024-12-09 05:27:52.558305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.341 [2024-12-09 05:27:52.558397] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:33:10.341 [2024-12-09 05:27:52.558413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:33:10.341 [2024-12-09 05:27:52.558425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:33:10.341 [2024-12-09 05:27:52.558481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:33:10.341 [2024-12-09 05:27:52.558521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:10.341 [2024-12-09 05:27:52.558544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:33:10.341 [2024-12-09 05:27:52.558557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:33:10.341 [2024-12-09 05:27:52.558566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:10.341 [2024-12-09 05:27:52.558578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:33:10.341 [2024-12-09 05:27:52.558588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:33:10.341 [2024-12-09 05:27:52.558600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:10.341 
[2024-12-09 05:27:52.558609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:33:10.341 [2024-12-09 05:27:52.558622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:33:10.341 [2024-12-09 05:27:52.558673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:33:10.341 [2024-12-09 05:27:52.558709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:33:10.341 [2024-12-09 05:27:52.558738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:10.341 [2024-12-09 05:27:52.558760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:33:10.341 [2024-12-09 05:27:52.558772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:33:10.341 [2024-12-09 05:27:52.558781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:10.342 [2024-12-09 05:27:52.558792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:33:10.342 [2024-12-09 05:27:52.558801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:33:10.342 [2024-12-09 05:27:52.558815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:10.342 [2024-12-09 05:27:52.558824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:33:10.342 [2024-12-09 05:27:52.558836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:33:10.342 [2024-12-09 05:27:52.558844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:10.342 [2024-12-09 05:27:52.558857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:33:10.342 [2024-12-09 05:27:52.558867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:33:10.342 [2024-12-09 05:27:52.558882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:10.342 [2024-12-09 05:27:52.558891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:33:10.342 [2024-12-09 05:27:52.558903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:33:10.342 [2024-12-09 05:27:52.558912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:10.342 [2024-12-09 05:27:52.558923] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:33:10.342 [2024-12-09 05:27:52.558936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:33:10.342 [2024-12-09 05:27:52.558949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:10.342 [2024-12-09 05:27:52.558958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:10.342 [2024-12-09 05:27:52.558971] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 01:33:10.342 [2024-12-09 05:27:52.558980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:33:10.342 [2024-12-09 05:27:52.559001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:33:10.342 [2024-12-09 05:27:52.559011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:33:10.342 [2024-12-09 05:27:52.559023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:33:10.342 [2024-12-09 05:27:52.559032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:33:10.342 [2024-12-09 05:27:52.559046] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:33:10.342 [2024-12-09 05:27:52.559059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:33:10.342 [2024-12-09 05:27:52.559089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:33:10.342 [2024-12-09 05:27:52.559104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:33:10.342 [2024-12-09 05:27:52.559114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:33:10.342 [2024-12-09 05:27:52.559127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:33:10.342 [2024-12-09 05:27:52.559137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:33:10.342 [2024-12-09 05:27:52.559150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:33:10.342 [2024-12-09 05:27:52.559159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:33:10.342 [2024-12-09 05:27:52.559173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:33:10.342 [2024-12-09 05:27:52.559184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:33:10.342 [2024-12-09 05:27:52.559242] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:33:10.342 [2024-12-09 
05:27:52.559255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:33:10.342 [2024-12-09 05:27:52.559283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:33:10.342 [2024-12-09 05:27:52.559296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:33:10.342 [2024-12-09 05:27:52.559305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:33:10.342 [2024-12-09 05:27:52.559320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.559330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:33:10.342 [2024-12-09 05:27:52.559343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 01:33:10.342 [2024-12-09 05:27:52.559356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.608798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.608830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:10.342 [2024-12-09 05:27:52.608848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.456 ms 01:33:10.342 [2024-12-09 05:27:52.608862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.609011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.609025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:33:10.342 [2024-12-09 05:27:52.609039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 01:33:10.342 [2024-12-09 05:27:52.609049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.660527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.660561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:10.342 [2024-12-09 05:27:52.660578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.530 ms 01:33:10.342 [2024-12-09 05:27:52.660589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.660665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.660676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:10.342 [2024-12-09 05:27:52.660691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:33:10.342 [2024-12-09 05:27:52.660701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.661434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.661456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:10.342 [2024-12-09 05:27:52.661486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 01:33:10.342 [2024-12-09 05:27:52.661496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.661627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.661641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:10.342 [2024-12-09 05:27:52.661656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 01:33:10.342 [2024-12-09 05:27:52.661666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.686729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.686764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:10.342 [2024-12-09 05:27:52.686781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.073 ms 01:33:10.342 [2024-12-09 05:27:52.686792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.718488] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:33:10.342 [2024-12-09 05:27:52.718550] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:33:10.342 [2024-12-09 05:27:52.718576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.718589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:33:10.342 [2024-12-09 05:27:52.718608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.690 ms 01:33:10.342 [2024-12-09 05:27:52.718631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.750358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.750426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:33:10.342 [2024-12-09 05:27:52.750449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.614 ms 01:33:10.342 [2024-12-09 05:27:52.750468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.769357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.769396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:33:10.342 [2024-12-09 05:27:52.769418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 01:33:10.342 [2024-12-09 05:27:52.769428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.786379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.786410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:33:10.342 [2024-12-09 05:27:52.786425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.874 ms 01:33:10.342 [2024-12-09 05:27:52.786435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.342 [2024-12-09 05:27:52.787299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.342 [2024-12-09 05:27:52.787330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:33:10.342 [2024-12-09 05:27:52.787346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 01:33:10.342 [2024-12-09 05:27:52.787357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 
05:27:52.882261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.882328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:33:10.601 [2024-12-09 05:27:52.882351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.019 ms 01:33:10.601 [2024-12-09 05:27:52.882362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.893915] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:33:10.601 [2024-12-09 05:27:52.919285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.919338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:33:10.601 [2024-12-09 05:27:52.919355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.813 ms 01:33:10.601 [2024-12-09 05:27:52.919368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.919501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.919519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:33:10.601 [2024-12-09 05:27:52.919532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:33:10.601 [2024-12-09 05:27:52.919545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.919614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.919629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:33:10.601 [2024-12-09 05:27:52.919640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 01:33:10.601 [2024-12-09 05:27:52.919657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.919685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.919699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:33:10.601 [2024-12-09 05:27:52.919709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:33:10.601 [2024-12-09 05:27:52.919722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.919767] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:33:10.601 [2024-12-09 05:27:52.919786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.919800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:33:10.601 [2024-12-09 05:27:52.919813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:33:10.601 [2024-12-09 05:27:52.919826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.955554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.955592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:33:10.601 [2024-12-09 05:27:52.955609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.753 ms 01:33:10.601 [2024-12-09 05:27:52.955620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:10.601 [2024-12-09 05:27:52.955745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:10.601 [2024-12-09 05:27:52.955759] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
01:33:10.601 [2024-12-09 05:27:52.955778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
01:33:10.602 [2024-12-09 05:27:52.955788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:10.602 [2024-12-09 05:27:52.957063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
01:33:10.602 [2024-12-09 05:27:52.961123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 447.271 ms, result 0
01:33:10.602 [2024-12-09 05:27:52.962444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
01:33:10.602 Some configs were skipped because the RPC state that can call them passed over.
01:33:10.602 05:27:53 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
01:33:10.860 [2024-12-09 05:27:53.205483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:10.860 [2024-12-09 05:27:53.205565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
01:33:10.860 [2024-12-09 05:27:53.205584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms
01:33:10.860 [2024-12-09 05:27:53.205599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:10.860 [2024-12-09 05:27:53.205642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.925 ms, result 0
01:33:10.860 true
01:33:10.860 05:27:53 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
01:33:11.119 [2024-12-09 05:27:53.409075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:11.119 [2024-12-09 05:27:53.409148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
01:33:11.119 [2024-12-09 05:27:53.409171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms
01:33:11.119 [2024-12-09 05:27:53.409181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:11.119 [2024-12-09 05:27:53.409229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.524 ms, result 0
01:33:11.119 true
01:33:11.119 05:27:53 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78905
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78905 ']'
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78905
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78905
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:33:11.119 killing process with pid 78905 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78905'
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78905
01:33:11.119 05:27:53 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78905
01:33:12.495 [2024-12-09 05:27:54.643981]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.495 [2024-12-09 05:27:54.644048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:33:12.495 [2024-12-09 05:27:54.644065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:33:12.495 [2024-12-09 05:27:54.644078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.644108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:33:12.496 [2024-12-09 05:27:54.648864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.648897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:33:12.496 [2024-12-09 05:27:54.648916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.740 ms 01:33:12.496 [2024-12-09 05:27:54.648927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.649202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.649216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:33:12.496 [2024-12-09 05:27:54.649228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 01:33:12.496 [2024-12-09 05:27:54.649238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.652557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.652595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:33:12.496 [2024-12-09 05:27:54.652609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 01:33:12.496 [2024-12-09 05:27:54.652620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.657996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.658032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:33:12.496 [2024-12-09 05:27:54.658047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.340 ms 01:33:12.496 [2024-12-09 05:27:54.658057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.672681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.672725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:33:12.496 [2024-12-09 05:27:54.672744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.578 ms 01:33:12.496 [2024-12-09 05:27:54.672754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.684080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.684121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:33:12.496 [2024-12-09 05:27:54.684137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 01:33:12.496 [2024-12-09 05:27:54.684147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.496 [2024-12-09 05:27:54.684288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.496 [2024-12-09 05:27:54.684302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:33:12.496 [2024-12-09 05:27:54.684315] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
01:33:12.496 [2024-12-09 05:27:54.684325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:12.496 [2024-12-09 05:27:54.699897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:12.496 [2024-12-09 05:27:54.699934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
01:33:12.496 [2024-12-09 05:27:54.699950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.570 ms
01:33:12.496 [2024-12-09 05:27:54.699962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:12.496 [2024-12-09 05:27:54.714014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:12.496 [2024-12-09 05:27:54.714049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
01:33:12.496 [2024-12-09 05:27:54.714068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.018 ms
01:33:12.496 [2024-12-09 05:27:54.714078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:12.496 [2024-12-09 05:27:54.727913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:12.496 [2024-12-09 05:27:54.727947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
01:33:12.496 [2024-12-09 05:27:54.727962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.797 ms
01:33:12.496 [2024-12-09 05:27:54.727972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:12.496 [2024-12-09 05:27:54.741968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:12.496 [2024-12-09 05:27:54.742003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
01:33:12.496 [2024-12-09 05:27:54.742019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.935 ms
01:33:12.496 [2024-12-09 05:27:54.742029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:33:12.496 [2024-12-09 05:27:54.742097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
01:33:12.496 [2024-12-09 05:27:54.742115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
01:33:12.497 [2024-12-09 05:27:54.743425] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
01:33:12.497 [2024-12-09 05:27:54.743442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878
01:33:12.497 [2024-12-09 05:27:54.743458] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
01:33:12.497 [2024-12-09 05:27:54.743482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
01:33:12.497 [2024-12-09 05:27:54.743494] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
01:33:12.497 [2024-12-09 05:27:54.743509] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
01:33:12.497 [2024-12-09 05:27:54.743519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
01:33:12.497 [2024-12-09 05:27:54.743533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
01:33:12.497 [2024-12-09 05:27:54.743543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
01:33:12.497 [2024-12-09 05:27:54.743556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
01:33:12.497 [2024-12-09 05:27:54.743566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
01:33:12.497 [2024-12-09 05:27:54.743579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:33:12.497 [2024-12-09 05:27:54.743590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:33:12.497 [2024-12-09 05:27:54.743604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 01:33:12.497 [2024-12-09 05:27:54.743617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.497 [2024-12-09 05:27:54.763768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.497 [2024-12-09 05:27:54.763801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:33:12.497 [2024-12-09 05:27:54.763821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.144 ms 01:33:12.497 [2024-12-09 05:27:54.763831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.497 [2024-12-09 05:27:54.764449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:12.497 [2024-12-09 05:27:54.764496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:33:12.497 [2024-12-09 05:27:54.764514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 01:33:12.497 [2024-12-09 05:27:54.764525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.497 [2024-12-09 05:27:54.835548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.497 [2024-12-09 05:27:54.835607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:12.497 [2024-12-09 05:27:54.835627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.497 [2024-12-09 05:27:54.835638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.497 [2024-12-09 05:27:54.835800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.497 [2024-12-09 05:27:54.835814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:12.497 [2024-12-09 05:27:54.835833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.497 [2024-12-09 05:27:54.835843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.497 [2024-12-09 05:27:54.835916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.497 [2024-12-09 05:27:54.835930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:12.497 [2024-12-09 05:27:54.835949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.497 [2024-12-09 05:27:54.835959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.497 [2024-12-09 05:27:54.835985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.497 [2024-12-09 05:27:54.835996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:12.497 [2024-12-09 05:27:54.836010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.497 [2024-12-09 05:27:54.836024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:54.966288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:54.966354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:12.757 [2024-12-09 05:27:54.966375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:54.966387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 
05:27:55.069986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:12.757 [2024-12-09 05:27:55.070084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.070251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:12.757 [2024-12-09 05:27:55.070283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.070330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:12.757 [2024-12-09 05:27:55.070356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.070525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:12.757 [2024-12-09 05:27:55.070555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.070614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:33:12.757 [2024-12-09 05:27:55.070640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.070705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:12.757 [2024-12-09 05:27:55.070735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.070805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:12.757 [2024-12-09 05:27:55.070817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:12.757 [2024-12-09 05:27:55.070832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:12.757 [2024-12-09 05:27:55.070841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:12.757 [2024-12-09 05:27:55.071023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 427.694 ms, result 0 01:33:14.128 05:27:56 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:33:14.128 [2024-12-09 05:27:56.323162] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:33:14.128 [2024-12-09 05:27:56.323296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78977 ] 01:33:14.128 [2024-12-09 05:27:56.509475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:14.385 [2024-12-09 05:27:56.639992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:14.643 [2024-12-09 05:27:57.056356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:14.643 [2024-12-09 05:27:57.056446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:14.901 [2024-12-09 05:27:57.222955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.223021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:33:14.901 [2024-12-09 05:27:57.223040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:33:14.901 [2024-12-09 05:27:57.223051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.226561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.226598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:14.901 [2024-12-09 05:27:57.226611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 01:33:14.901 [2024-12-09 05:27:57.226621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.226726] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:33:14.901 [2024-12-09 05:27:57.227696] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:33:14.901 [2024-12-09 05:27:57.227730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.227742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:14.901 [2024-12-09 05:27:57.227754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 01:33:14.901 [2024-12-09 05:27:57.227765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.230267] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:33:14.901 [2024-12-09 05:27:57.249932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.249971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:33:14.901 [2024-12-09 05:27:57.249988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.699 ms 01:33:14.901 [2024-12-09 05:27:57.249998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.250103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.250118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:33:14.901 [2024-12-09 05:27:57.250130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 01:33:14.901 [2024-12-09 
05:27:57.250140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.262031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.262059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:14.901 [2024-12-09 05:27:57.262072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.867 ms 01:33:14.901 [2024-12-09 05:27:57.262082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.262205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.262222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:14.901 [2024-12-09 05:27:57.262234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:33:14.901 [2024-12-09 05:27:57.262245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.262279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.262290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:33:14.901 [2024-12-09 05:27:57.262302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:33:14.901 [2024-12-09 05:27:57.262313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.262337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:33:14.901 [2024-12-09 05:27:57.267885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.267918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:14.901 [2024-12-09 05:27:57.267932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.564 ms 01:33:14.901 [2024-12-09 05:27:57.267943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.267995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.268009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:33:14.901 [2024-12-09 05:27:57.268020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:33:14.901 [2024-12-09 05:27:57.268030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.268058] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:33:14.901 [2024-12-09 05:27:57.268084] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:33:14.901 [2024-12-09 05:27:57.268121] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:33:14.901 [2024-12-09 05:27:57.268151] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:33:14.901 [2024-12-09 05:27:57.268241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:33:14.901 [2024-12-09 05:27:57.268256] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:33:14.901 [2024-12-09 05:27:57.268269] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
01:33:14.901 [2024-12-09 05:27:57.268286] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:33:14.901 [2024-12-09 05:27:57.268299] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:33:14.901 [2024-12-09 05:27:57.268311] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:33:14.901 [2024-12-09 05:27:57.268321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:33:14.901 [2024-12-09 05:27:57.268332] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:33:14.901 [2024-12-09 05:27:57.268342] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:33:14.901 [2024-12-09 05:27:57.268353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.268363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:33:14.901 [2024-12-09 05:27:57.268374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 01:33:14.901 [2024-12-09 05:27:57.268384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.268457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.901 [2024-12-09 05:27:57.268486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:33:14.901 [2024-12-09 05:27:57.268498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:33:14.901 [2024-12-09 05:27:57.268507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.901 [2024-12-09 05:27:57.268599] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:33:14.901 [2024-12-09 05:27:57.268618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:33:14.901 [2024-12-09 05:27:57.268629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:14.901 [2024-12-09 05:27:57.268640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:14.901 [2024-12-09 05:27:57.268651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:33:14.901 [2024-12-09 05:27:57.268660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:33:14.901 [2024-12-09 05:27:57.268669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:33:14.901 [2024-12-09 05:27:57.268679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:33:14.901 [2024-12-09 05:27:57.268689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:33:14.901 [2024-12-09 05:27:57.268698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:14.901 [2024-12-09 05:27:57.268710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:33:14.901 [2024-12-09 05:27:57.268731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:33:14.901 [2024-12-09 05:27:57.268741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:14.901 [2024-12-09 05:27:57.268750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:33:14.901 [2024-12-09 05:27:57.268760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:33:14.902 [2024-12-09 05:27:57.268769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 01:33:14.902 [2024-12-09 05:27:57.268788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:33:14.902 [2024-12-09 05:27:57.268796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:33:14.902 [2024-12-09 05:27:57.268814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:14.902 [2024-12-09 05:27:57.268832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:33:14.902 [2024-12-09 05:27:57.268842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:14.902 [2024-12-09 05:27:57.268859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:33:14.902 [2024-12-09 05:27:57.268867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:14.902 [2024-12-09 05:27:57.268884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:33:14.902 [2024-12-09 05:27:57.268893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:14.902 [2024-12-09 05:27:57.268911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:33:14.902 [2024-12-09 05:27:57.268919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:14.902 [2024-12-09 05:27:57.268937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:33:14.902 [2024-12-09 05:27:57.268945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:33:14.902 [2024-12-09 05:27:57.268953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:14.902 [2024-12-09 05:27:57.268962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:33:14.902 [2024-12-09 05:27:57.268971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:33:14.902 [2024-12-09 05:27:57.268979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:14.902 [2024-12-09 05:27:57.268988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:33:14.902 [2024-12-09 05:27:57.268997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:33:14.902 [2024-12-09 05:27:57.269007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:14.902 [2024-12-09 05:27:57.269016] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:33:14.902 [2024-12-09 05:27:57.269027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:33:14.902 [2024-12-09 05:27:57.269041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:14.902 [2024-12-09 05:27:57.269051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:14.902 [2024-12-09 05:27:57.269060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:33:14.902 [2024-12-09 05:27:57.269070] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:33:14.902 [2024-12-09 05:27:57.269079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:33:14.902 [2024-12-09 05:27:57.269087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:33:14.902 [2024-12-09 05:27:57.269096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:33:14.902 [2024-12-09 05:27:57.269105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:33:14.902 [2024-12-09 05:27:57.269115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:33:14.902 [2024-12-09 05:27:57.269127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:33:14.902 [2024-12-09 05:27:57.269150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:33:14.902 [2024-12-09 05:27:57.269160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:33:14.902 [2024-12-09 05:27:57.269170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:33:14.902 [2024-12-09 05:27:57.269181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:33:14.902 [2024-12-09 05:27:57.269191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:33:14.902 [2024-12-09 05:27:57.269202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:33:14.902 [2024-12-09 05:27:57.269212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:33:14.902 [2024-12-09 05:27:57.269224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:33:14.902 [2024-12-09 05:27:57.269235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:33:14.902 [2024-12-09 05:27:57.269283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:33:14.902 [2024-12-09 05:27:57.269294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:33:14.902 [2024-12-09 05:27:57.269314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:33:14.902 [2024-12-09 05:27:57.269323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:33:14.902 [2024-12-09 05:27:57.269334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:33:14.902 [2024-12-09 05:27:57.269345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.902 [2024-12-09 05:27:57.269360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:33:14.902 [2024-12-09 05:27:57.269370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 01:33:14.902 [2024-12-09 05:27:57.269380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.902 [2024-12-09 05:27:57.318134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.902 [2024-12-09 05:27:57.318169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:14.902 [2024-12-09 05:27:57.318183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.770 ms 01:33:14.902 [2024-12-09 05:27:57.318195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:14.902 [2024-12-09 05:27:57.318353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:14.902 [2024-12-09 05:27:57.318367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:33:14.902 [2024-12-09 05:27:57.318378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:33:14.902 [2024-12-09 05:27:57.318388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.160 [2024-12-09 05:27:57.400975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.160 [2024-12-09 05:27:57.401020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:15.160 [2024-12-09 05:27:57.401035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.696 ms 01:33:15.160 [2024-12-09 05:27:57.401045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.160 [2024-12-09 05:27:57.401139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.160 [2024-12-09 05:27:57.401156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:15.160 [2024-12-09 05:27:57.401168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:33:15.160 [2024-12-09 05:27:57.401179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.160 [2024-12-09 05:27:57.401962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.160 [2024-12-09 05:27:57.401983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:15.160 [2024-12-09 05:27:57.402003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 01:33:15.160 [2024-12-09 05:27:57.402013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.160 [2024-12-09 05:27:57.402148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:33:15.160 [2024-12-09 05:27:57.402162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:15.160 [2024-12-09 05:27:57.402173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 01:33:15.160 [2024-12-09 05:27:57.402183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.160 [2024-12-09 05:27:57.423639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.160 [2024-12-09 05:27:57.423677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:15.161 [2024-12-09 05:27:57.423691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.466 ms 01:33:15.161 [2024-12-09 05:27:57.423703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.443323] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:33:15.161 [2024-12-09 05:27:57.443364] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:33:15.161 [2024-12-09 05:27:57.443379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.161 [2024-12-09 05:27:57.443391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:33:15.161 [2024-12-09 05:27:57.443403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.565 ms 01:33:15.161 [2024-12-09 05:27:57.443413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.472632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.161 [2024-12-09 05:27:57.472670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:33:15.161 [2024-12-09 05:27:57.472684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.166 ms 01:33:15.161 [2024-12-09 05:27:57.472695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.490368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.161 [2024-12-09 05:27:57.490403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:33:15.161 [2024-12-09 05:27:57.490416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.621 ms 01:33:15.161 [2024-12-09 05:27:57.490426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.507225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.161 [2024-12-09 05:27:57.507260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:33:15.161 [2024-12-09 05:27:57.507273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.744 ms 01:33:15.161 [2024-12-09 05:27:57.507283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.508027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.161 [2024-12-09 05:27:57.508056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:33:15.161 [2024-12-09 05:27:57.508068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 01:33:15.161 [2024-12-09 05:27:57.508079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.601171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.161 [2024-12-09 
05:27:57.601231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:33:15.161 [2024-12-09 05:27:57.601248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.213 ms 01:33:15.161 [2024-12-09 05:27:57.601261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.161 [2024-12-09 05:27:57.611634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:33:15.419 [2024-12-09 05:27:57.635632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.419 [2024-12-09 05:27:57.635675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:33:15.419 [2024-12-09 05:27:57.635692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.338 ms 01:33:15.419 [2024-12-09 05:27:57.635710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.419 [2024-12-09 05:27:57.635859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.419 [2024-12-09 05:27:57.635874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:33:15.419 [2024-12-09 05:27:57.635887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:33:15.419 [2024-12-09 05:27:57.635899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.419 [2024-12-09 05:27:57.635967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.419 [2024-12-09 05:27:57.635979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:33:15.419 [2024-12-09 05:27:57.635990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:33:15.419 [2024-12-09 05:27:57.636005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.419 [2024-12-09 05:27:57.636047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.419 [2024-12-09 05:27:57.636061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:33:15.419 [2024-12-09 05:27:57.636072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:33:15.419 [2024-12-09 05:27:57.636082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.419 [2024-12-09 05:27:57.636126] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:33:15.419 [2024-12-09 05:27:57.636138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.419 [2024-12-09 05:27:57.636149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:33:15.419 [2024-12-09 05:27:57.636170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:33:15.420 [2024-12-09 05:27:57.636180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.420 [2024-12-09 05:27:57.671193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.420 [2024-12-09 05:27:57.671233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:33:15.420 [2024-12-09 05:27:57.671247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.045 ms 01:33:15.420 [2024-12-09 05:27:57.671258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.420 [2024-12-09 05:27:57.671381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:15.420 [2024-12-09 05:27:57.671395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:33:15.420 [2024-12-09 
05:27:57.671407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 01:33:15.420 [2024-12-09 05:27:57.671418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:15.420 [2024-12-09 05:27:57.672788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:33:15.420 [2024-12-09 05:27:57.676877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 450.199 ms, result 0 01:33:15.420 [2024-12-09 05:27:57.677861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:33:15.420 [2024-12-09 05:27:57.695673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:33:16.355  [2024-12-09T05:28:00.187Z] Copying: 27/256 [MB] (27 MBps) [2024-12-09T05:28:01.125Z] Copying: 51/256 [MB] (24 MBps) [2024-12-09T05:28:02.061Z] Copying: 75/256 [MB] (24 MBps) [2024-12-09T05:28:02.997Z] Copying: 100/256 [MB] (24 MBps) [2024-12-09T05:28:03.978Z] Copying: 126/256 [MB] (25 MBps) [2024-12-09T05:28:04.940Z] Copying: 151/256 [MB] (24 MBps) [2024-12-09T05:28:05.877Z] Copying: 175/256 [MB] (24 MBps) [2024-12-09T05:28:06.813Z] Copying: 199/256 [MB] (24 MBps) [2024-12-09T05:28:07.748Z] Copying: 223/256 [MB] (23 MBps) [2024-12-09T05:28:08.314Z] Copying: 246/256 [MB] (23 MBps) [2024-12-09T05:28:08.572Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-09 05:28:08.438526] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:33:26.116 [2024-12-09 05:28:08.456168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.456213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:33:26.116 [2024-12-09 05:28:08.456241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:33:26.116 [2024-12-09 05:28:08.456252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.456281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:33:26.116 [2024-12-09 05:28:08.460683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.460716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:33:26.116 [2024-12-09 05:28:08.460729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.390 ms 01:33:26.116 [2024-12-09 05:28:08.460740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.461007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.461021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:33:26.116 [2024-12-09 05:28:08.461033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 01:33:26.116 [2024-12-09 05:28:08.461044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.464368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.464401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:33:26.116 [2024-12-09 05:28:08.464414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 01:33:26.116 [2024-12-09 05:28:08.464426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.470204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.470241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:33:26.116 [2024-12-09 05:28:08.470253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 01:33:26.116 [2024-12-09 05:28:08.470263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.505281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.505322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:33:26.116 [2024-12-09 05:28:08.505336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.984 ms 01:33:26.116 [2024-12-09 05:28:08.505348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.543706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.543799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:33:26.116 [2024-12-09 05:28:08.543837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.348 ms 01:33:26.116 [2024-12-09 05:28:08.543853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.116 [2024-12-09 05:28:08.544112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.116 [2024-12-09 05:28:08.544133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:33:26.116 [2024-12-09 05:28:08.544166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 01:33:26.116 [2024-12-09 05:28:08.544182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.375 [2024-12-09 05:28:08.581001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.375 [2024-12-09 05:28:08.581038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:33:26.375 [2024-12-09 05:28:08.581051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.852 ms 01:33:26.375 [2024-12-09 05:28:08.581062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.375 [2024-12-09 05:28:08.616240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.375 [2024-12-09 05:28:08.616278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:33:26.375 [2024-12-09 05:28:08.616291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.160 ms 01:33:26.375 [2024-12-09 05:28:08.616301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.375 [2024-12-09 05:28:08.651004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.375 [2024-12-09 05:28:08.651056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:33:26.375 [2024-12-09 05:28:08.651069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.701 ms 01:33:26.375 [2024-12-09 05:28:08.651079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.375 [2024-12-09 05:28:08.685850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.375 [2024-12-09 05:28:08.685886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:33:26.375 [2024-12-09 05:28:08.685900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.706 ms 01:33:26.375 
[2024-12-09 05:28:08.685910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.375 [2024-12-09 05:28:08.685981] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:33:26.375 [2024-12-09 05:28:08.686000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:33:26.375 [2024-12-09 05:28:08.686115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686253] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 
05:28:08.686535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
01:33:26.376 [2024-12-09 05:28:08.686806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.686984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:33:26.376 [2024-12-09 05:28:08.687118] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:33:26.377 [2024-12-09 05:28:08.687128] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea97d973-01dc-423e-88bb-a65e4c614878 01:33:26.377 [2024-12-09 05:28:08.687139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:33:26.377 [2024-12-09 05:28:08.687149] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:33:26.377 [2024-12-09 05:28:08.687159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:33:26.377 [2024-12-09 05:28:08.687169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:33:26.377 [2024-12-09 05:28:08.687180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:33:26.377 [2024-12-09 05:28:08.687190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:33:26.377 [2024-12-09 05:28:08.687204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:33:26.377 [2024-12-09 05:28:08.687214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:33:26.377 [2024-12-09 05:28:08.687222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:33:26.377 [2024-12-09 05:28:08.687233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.377 [2024-12-09 05:28:08.687243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:33:26.377 [2024-12-09 05:28:08.687254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 01:33:26.377 [2024-12-09 05:28:08.687264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.377 [2024-12-09 05:28:08.706781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.377 [2024-12-09 05:28:08.706813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:33:26.377 [2024-12-09 05:28:08.706842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.527 ms 01:33:26.377 [2024-12-09 05:28:08.706852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.377 [2024-12-09 05:28:08.707432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:26.377 [2024-12-09 05:28:08.707455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:33:26.377 [2024-12-09 05:28:08.707482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 01:33:26.377 [2024-12-09 05:28:08.707493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.377 [2024-12-09 05:28:08.761189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.377 [2024-12-09 05:28:08.761224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:26.377 [2024-12-09 05:28:08.761237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.377 [2024-12-09 05:28:08.761251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.377 [2024-12-09 05:28:08.761364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.377 [2024-12-09 05:28:08.761376] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:26.377 [2024-12-09 05:28:08.761386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.377 [2024-12-09 05:28:08.761397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.377 [2024-12-09 05:28:08.761445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.377 [2024-12-09 05:28:08.761459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:26.377 [2024-12-09 05:28:08.761468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.377 [2024-12-09 05:28:08.761488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.377 [2024-12-09 05:28:08.761510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.377 [2024-12-09 05:28:08.761521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:26.377 [2024-12-09 05:28:08.761531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.377 [2024-12-09 05:28:08.761541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.882934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.883017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:26.634 [2024-12-09 05:28:08.883034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.883045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.983781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.983842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:26.634 [2024-12-09 05:28:08.983858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.983870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.983968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.983980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:26.634 [2024-12-09 05:28:08.983991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.984002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.984032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.984050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:26.634 [2024-12-09 05:28:08.984062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.984072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.984192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.984207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:26.634 [2024-12-09 05:28:08.984218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.984229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.984266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.984278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:33:26.634 [2024-12-09 05:28:08.984292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.984303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.984348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.984360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:26.634 [2024-12-09 05:28:08.984369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.984379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.984428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:26.634 [2024-12-09 05:28:08.984445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:26.634 [2024-12-09 05:28:08.984455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:26.634 [2024-12-09 05:28:08.984479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:26.634 [2024-12-09 05:28:08.984632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.331 ms, result 0 01:33:28.011 01:33:28.011 01:33:28.011 05:28:10 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:33:28.271 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 01:33:28.271 Process with pid 78905 is not found 01:33:28.271 05:28:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78905 01:33:28.271 05:28:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78905 ']' 01:33:28.271 05:28:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78905 01:33:28.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78905) - No such process 01:33:28.271 05:28:10 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78905 is not found' 01:33:28.271 01:33:28.271 real 1m12.589s 01:33:28.271 user 1m35.065s 01:33:28.271 sys 0m8.063s 01:33:28.271 05:28:10 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:28.271 ************************************ 01:33:28.271 END TEST ftl_trim 01:33:28.271 ************************************ 01:33:28.271 05:28:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:33:28.271 05:28:10 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 01:33:28.271 05:28:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:33:28.271 05:28:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:28.271 05:28:10 ftl -- common/autotest_common.sh@10 
-- # set +x 01:33:28.531 ************************************ 01:33:28.531 START TEST ftl_restore 01:33:28.531 ************************************ 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 01:33:28.531 * Looking for test storage... 01:33:28.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:28.531 05:28:10 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:28.531 --rc genhtml_branch_coverage=1 01:33:28.531 --rc genhtml_function_coverage=1 01:33:28.531 --rc genhtml_legend=1 01:33:28.531 --rc geninfo_all_blocks=1 01:33:28.531 --rc geninfo_unexecuted_blocks=1 01:33:28.531 01:33:28.531 ' 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:28.531 --rc genhtml_branch_coverage=1 01:33:28.531 --rc genhtml_function_coverage=1 01:33:28.531 --rc genhtml_legend=1 01:33:28.531 --rc geninfo_all_blocks=1 01:33:28.531 --rc geninfo_unexecuted_blocks=1 01:33:28.531 01:33:28.531 ' 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:28.531 --rc genhtml_branch_coverage=1 01:33:28.531 --rc genhtml_function_coverage=1 01:33:28.531 --rc genhtml_legend=1 01:33:28.531 --rc geninfo_all_blocks=1 01:33:28.531 --rc geninfo_unexecuted_blocks=1 01:33:28.531 01:33:28.531 ' 01:33:28.531 05:28:10 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:28.531 --rc genhtml_branch_coverage=1 01:33:28.531 --rc genhtml_function_coverage=1 01:33:28.531 --rc genhtml_legend=1 01:33:28.531 --rc geninfo_all_blocks=1 01:33:28.531 --rc geninfo_unexecuted_blocks=1 01:33:28.531 01:33:28.531 ' 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
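The xtrace above captures scripts/common.sh resolving `lt 1.15 2` for the installed lcov: cmp_versions splits both version strings on `.`, `-` and `:` (IFS=.-:) into the ver1/ver2 arrays and walks them component by component, returning as soon as one side wins. A minimal standalone bash sketch of that comparison (simplified from the traced helper, which also normalizes each component through its decimal check):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split on '.', '-' and ':' exactly as in the trace; absent components count as 0.
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]   # all components equal
}

lt 1.15 2 && echo 'lcov 1.15 < 2'   # succeeds, as in the traced run (ver1[0]=1 < ver2[0]=2)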
01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:28.531 05:28:10 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 01:33:28.791 05:28:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HtnXj6MrFx 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 01:33:28.791 
05:28:11 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79186 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:33:28.791 05:28:11 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79186 01:33:28.791 05:28:11 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79186 ']' 01:33:28.791 05:28:11 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:28.791 05:28:11 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:28.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:28.791 05:28:11 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:28.791 05:28:11 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:28.791 05:28:11 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 01:33:28.791 [2024-12-09 05:28:11.114389] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:33:28.791 [2024-12-09 05:28:11.114928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79186 ] 01:33:29.050 [2024-12-09 05:28:11.300765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:29.050 [2024-12-09 05:28:11.411909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:29.986 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:29.986 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 01:33:29.986 05:28:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:33:29.986 05:28:12 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 01:33:29.986 05:28:12 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:33:29.986 05:28:12 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 01:33:29.986 05:28:12 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 01:33:29.986 05:28:12 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:33:30.246 05:28:12 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:33:30.246 05:28:12 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 01:33:30.246 05:28:12 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:33:30.246 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:33:30.246 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:33:30.246 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:33:30.246 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:33:30.246 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:33:30.505 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:33:30.505 { 01:33:30.505 "name": "nvme0n1", 01:33:30.505 "aliases": [ 01:33:30.505 "3fc17a44-a42a-4e26-816e-8e1eb9aedeeb" 01:33:30.505 ], 01:33:30.505 "product_name": "NVMe disk", 01:33:30.505 "block_size": 4096, 01:33:30.505 "num_blocks": 1310720, 01:33:30.505 "uuid": 
"3fc17a44-a42a-4e26-816e-8e1eb9aedeeb", 01:33:30.505 "numa_id": -1, 01:33:30.505 "assigned_rate_limits": { 01:33:30.505 "rw_ios_per_sec": 0, 01:33:30.505 "rw_mbytes_per_sec": 0, 01:33:30.505 "r_mbytes_per_sec": 0, 01:33:30.505 "w_mbytes_per_sec": 0 01:33:30.505 }, 01:33:30.505 "claimed": true, 01:33:30.506 "claim_type": "read_many_write_one", 01:33:30.506 "zoned": false, 01:33:30.506 "supported_io_types": { 01:33:30.506 "read": true, 01:33:30.506 "write": true, 01:33:30.506 "unmap": true, 01:33:30.506 "flush": true, 01:33:30.506 "reset": true, 01:33:30.506 "nvme_admin": true, 01:33:30.506 "nvme_io": true, 01:33:30.506 "nvme_io_md": false, 01:33:30.506 "write_zeroes": true, 01:33:30.506 "zcopy": false, 01:33:30.506 "get_zone_info": false, 01:33:30.506 "zone_management": false, 01:33:30.506 "zone_append": false, 01:33:30.506 "compare": true, 01:33:30.506 "compare_and_write": false, 01:33:30.506 "abort": true, 01:33:30.506 "seek_hole": false, 01:33:30.506 "seek_data": false, 01:33:30.506 "copy": true, 01:33:30.506 "nvme_iov_md": false 01:33:30.506 }, 01:33:30.506 "driver_specific": { 01:33:30.506 "nvme": [ 01:33:30.506 { 01:33:30.506 "pci_address": "0000:00:11.0", 01:33:30.506 "trid": { 01:33:30.506 "trtype": "PCIe", 01:33:30.506 "traddr": "0000:00:11.0" 01:33:30.506 }, 01:33:30.506 "ctrlr_data": { 01:33:30.506 "cntlid": 0, 01:33:30.506 "vendor_id": "0x1b36", 01:33:30.506 "model_number": "QEMU NVMe Ctrl", 01:33:30.506 "serial_number": "12341", 01:33:30.506 "firmware_revision": "8.0.0", 01:33:30.506 "subnqn": "nqn.2019-08.org.qemu:12341", 01:33:30.506 "oacs": { 01:33:30.506 "security": 0, 01:33:30.506 "format": 1, 01:33:30.506 "firmware": 0, 01:33:30.506 "ns_manage": 1 01:33:30.506 }, 01:33:30.506 "multi_ctrlr": false, 01:33:30.506 "ana_reporting": false 01:33:30.506 }, 01:33:30.506 "vs": { 01:33:30.506 "nvme_version": "1.4" 01:33:30.506 }, 01:33:30.506 "ns_data": { 01:33:30.506 "id": 1, 01:33:30.506 "can_share": false 01:33:30.506 } 01:33:30.506 } 01:33:30.506 ], 01:33:30.506 "mp_policy": "active_passive" 01:33:30.506 } 01:33:30.506 } 01:33:30.506 ]' 01:33:30.506 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:33:30.506 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:33:30.506 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:33:30.506 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 01:33:30.506 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:33:30.506 05:28:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 01:33:30.506 05:28:12 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 01:33:30.506 05:28:12 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:33:30.506 05:28:12 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 01:33:30.506 05:28:12 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:33:30.506 05:28:12 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:33:30.765 05:28:12 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8fb93c42-fdd1-418a-a35f-b87c841c6766 01:33:30.766 05:28:12 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 01:33:30.766 05:28:12 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fb93c42-fdd1-418a-a35f-b87c841c6766 01:33:31.025 05:28:13 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 01:33:31.025 05:28:13 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47 01:33:31.025 05:28:13 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 01:33:31.284 05:28:13 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.284 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.284 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:33:31.284 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:33:31.284 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:33:31.284 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.544 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:33:31.544 { 01:33:31.544 "name": "051047a7-1e6a-4fee-8846-b815c09bced2", 01:33:31.544 "aliases": [ 01:33:31.544 "lvs/nvme0n1p0" 01:33:31.544 ], 01:33:31.544 "product_name": "Logical Volume", 01:33:31.544 "block_size": 4096, 01:33:31.544 "num_blocks": 26476544, 01:33:31.544 "uuid": "051047a7-1e6a-4fee-8846-b815c09bced2", 01:33:31.544 "assigned_rate_limits": { 01:33:31.544 "rw_ios_per_sec": 0, 01:33:31.544 "rw_mbytes_per_sec": 0, 01:33:31.544 "r_mbytes_per_sec": 0, 01:33:31.544 "w_mbytes_per_sec": 0 01:33:31.544 }, 01:33:31.544 "claimed": false, 01:33:31.544 "zoned": false, 01:33:31.544 "supported_io_types": { 01:33:31.544 "read": true, 01:33:31.544 "write": true, 01:33:31.544 "unmap": true, 01:33:31.544 "flush": false, 01:33:31.544 "reset": true, 01:33:31.544 "nvme_admin": false, 01:33:31.544 "nvme_io": false, 01:33:31.544 "nvme_io_md": false, 01:33:31.544 "write_zeroes": true, 01:33:31.544 "zcopy": false, 01:33:31.544 "get_zone_info": false, 01:33:31.544 "zone_management": false, 01:33:31.544 "zone_append": false, 01:33:31.544 "compare": false, 01:33:31.544 "compare_and_write": false, 01:33:31.544 "abort": false, 01:33:31.544 "seek_hole": true, 01:33:31.544 "seek_data": true, 01:33:31.544 "copy": false, 01:33:31.544 "nvme_iov_md": false 01:33:31.544 }, 01:33:31.544 "driver_specific": { 01:33:31.544 "lvol": { 01:33:31.544 "lvol_store_uuid": "a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47", 01:33:31.544 "base_bdev": "nvme0n1", 01:33:31.544 "thin_provision": true, 01:33:31.544 "num_allocated_clusters": 0, 01:33:31.544 "snapshot": false, 01:33:31.544 "clone": false, 01:33:31.544 "esnap_clone": false 01:33:31.544 } 01:33:31.544 } 01:33:31.544 } 01:33:31.544 ]' 01:33:31.544 05:28:13 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:33:31.544 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:33:31.544 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:33:31.544 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 01:33:31.544 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:33:31.544 05:28:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 01:33:31.544 05:28:13 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 01:33:31.544 05:28:13 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 01:33:31.544 05:28:13 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:33:31.803 05:28:14 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:33:31.803 05:28:14 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 01:33:31.803 05:28:14 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.803 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=051047a7-1e6a-4fee-8846-b815c09bced2 01:33:31.803 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:33:31.803 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:33:31.803 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:33:31.803 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:33:32.062 { 01:33:32.062 "name": "051047a7-1e6a-4fee-8846-b815c09bced2", 01:33:32.062 "aliases": [ 01:33:32.062 "lvs/nvme0n1p0" 01:33:32.062 ], 01:33:32.062 "product_name": "Logical Volume", 01:33:32.062 "block_size": 4096, 01:33:32.062 "num_blocks": 26476544, 01:33:32.062 "uuid": "051047a7-1e6a-4fee-8846-b815c09bced2", 01:33:32.062 "assigned_rate_limits": { 01:33:32.062 "rw_ios_per_sec": 0, 01:33:32.062 "rw_mbytes_per_sec": 0, 01:33:32.062 "r_mbytes_per_sec": 0, 01:33:32.062 "w_mbytes_per_sec": 0 01:33:32.062 }, 01:33:32.062 "claimed": false, 01:33:32.062 "zoned": false, 01:33:32.062 "supported_io_types": { 01:33:32.062 "read": true, 01:33:32.062 "write": true, 01:33:32.062 "unmap": true, 01:33:32.062 "flush": false, 01:33:32.062 "reset": true, 01:33:32.062 "nvme_admin": false, 01:33:32.062 "nvme_io": false, 01:33:32.062 "nvme_io_md": false, 01:33:32.062 "write_zeroes": true, 01:33:32.062 "zcopy": false, 01:33:32.062 "get_zone_info": false, 01:33:32.062 "zone_management": false, 01:33:32.062 "zone_append": false, 01:33:32.062 "compare": false, 01:33:32.062 "compare_and_write": false, 01:33:32.062 "abort": false, 01:33:32.062 "seek_hole": true, 01:33:32.062 "seek_data": true, 01:33:32.062 "copy": false, 01:33:32.062 "nvme_iov_md": false 01:33:32.062 }, 01:33:32.062 "driver_specific": { 01:33:32.062 "lvol": { 01:33:32.062 "lvol_store_uuid": "a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47", 01:33:32.062 "base_bdev": "nvme0n1", 01:33:32.062 "thin_provision": true, 01:33:32.062 "num_allocated_clusters": 0, 01:33:32.062 "snapshot": false, 01:33:32.062 "clone": false, 01:33:32.062 "esnap_clone": false 01:33:32.062 } 01:33:32.062 } 01:33:32.062 } 01:33:32.062 ]' 01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
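get_bdev_size, traced here and again below for the same lvol bdev, derives a size in MiB from the bdev_get_bdevs JSON: block_size times num_blocks, scaled down by 1024². A short sketch of that arithmetic using the values this run printed for nvme0n1 (same rpc.py path as used throughout the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_info=$("$rpc" bdev_get_bdevs -b nvme0n1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 in this run
echo $(( bs * nb / 1024 / 1024 ))             # -> 5120, the MiB figure echoed for nvme0n1 earlier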
01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:33:32.062 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 01:33:32.062 05:28:14 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 01:33:32.062 05:28:14 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:33:32.321 05:28:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 01:33:32.321 05:28:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:32.321 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=051047a7-1e6a-4fee-8846-b815c09bced2 01:33:32.321 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:33:32.321 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:33:32.321 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:33:32.321 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 051047a7-1e6a-4fee-8846-b815c09bced2 01:33:32.580 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:33:32.580 { 01:33:32.580 "name": "051047a7-1e6a-4fee-8846-b815c09bced2", 01:33:32.580 "aliases": [ 01:33:32.580 "lvs/nvme0n1p0" 01:33:32.580 ], 01:33:32.580 "product_name": "Logical Volume", 01:33:32.580 "block_size": 4096, 01:33:32.580 "num_blocks": 26476544, 01:33:32.580 "uuid": "051047a7-1e6a-4fee-8846-b815c09bced2", 01:33:32.580 "assigned_rate_limits": { 01:33:32.580 "rw_ios_per_sec": 0, 01:33:32.580 "rw_mbytes_per_sec": 0, 01:33:32.580 "r_mbytes_per_sec": 0, 01:33:32.580 "w_mbytes_per_sec": 0 01:33:32.580 }, 01:33:32.580 "claimed": false, 01:33:32.580 "zoned": false, 01:33:32.580 "supported_io_types": { 01:33:32.580 "read": true, 01:33:32.580 "write": true, 01:33:32.580 "unmap": true, 01:33:32.580 "flush": false, 01:33:32.580 "reset": true, 01:33:32.580 "nvme_admin": false, 01:33:32.580 "nvme_io": false, 01:33:32.580 "nvme_io_md": false, 01:33:32.580 "write_zeroes": true, 01:33:32.580 "zcopy": false, 01:33:32.580 "get_zone_info": false, 01:33:32.580 "zone_management": false, 01:33:32.580 "zone_append": false, 01:33:32.580 "compare": false, 01:33:32.580 "compare_and_write": false, 01:33:32.580 "abort": false, 01:33:32.580 "seek_hole": true, 01:33:32.580 "seek_data": true, 01:33:32.580 "copy": false, 01:33:32.580 "nvme_iov_md": false 01:33:32.580 }, 01:33:32.580 "driver_specific": { 01:33:32.580 "lvol": { 01:33:32.580 "lvol_store_uuid": "a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47", 01:33:32.580 "base_bdev": "nvme0n1", 01:33:32.580 "thin_provision": true, 01:33:32.580 "num_allocated_clusters": 0, 01:33:32.580 "snapshot": false, 01:33:32.580 "clone": false, 01:33:32.580 "esnap_clone": false 01:33:32.580 } 01:33:32.580 } 01:33:32.580 } 01:33:32.580 ]' 01:33:32.580 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:33:32.580 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:33:32.580 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:33:32.580 05:28:14 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 01:33:32.580 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:33:32.580 05:28:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 051047a7-1e6a-4fee-8846-b815c09bced2 --l2p_dram_limit 10' 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 01:33:32.580 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 01:33:32.580 05:28:14 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 051047a7-1e6a-4fee-8846-b815c09bced2 --l2p_dram_limit 10 -c nvc0n1p0 01:33:32.839 [2024-12-09 05:28:15.145284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.145333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:33:32.839 [2024-12-09 05:28:15.145352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:33:32.839 [2024-12-09 05:28:15.145363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.145435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.145448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:32.839 [2024-12-09 05:28:15.145473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 01:33:32.839 [2024-12-09 05:28:15.145485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.145508] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:33:32.839 [2024-12-09 05:28:15.146508] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:33:32.839 [2024-12-09 05:28:15.146538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.146549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:32.839 [2024-12-09 05:28:15.146563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 01:33:32.839 [2024-12-09 05:28:15.146573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.146668] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 28d9320f-13b5-493c-9ba2-857532a9178f 01:33:32.839 [2024-12-09 05:28:15.148126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.148153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:33:32.839 [2024-12-09 05:28:15.148166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 01:33:32.839 [2024-12-09 05:28:15.148180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.155724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 
05:28:15.155758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:32.839 [2024-12-09 05:28:15.155771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.505 ms 01:33:32.839 [2024-12-09 05:28:15.155784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.155886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.155904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:32.839 [2024-12-09 05:28:15.155915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 01:33:32.839 [2024-12-09 05:28:15.155932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.155981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.155996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:33:32.839 [2024-12-09 05:28:15.156010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:33:32.839 [2024-12-09 05:28:15.156022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.156047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:33:32.839 [2024-12-09 05:28:15.161059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.161087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:32.839 [2024-12-09 05:28:15.161103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.024 ms 01:33:32.839 [2024-12-09 05:28:15.161112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.161151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.161162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:33:32.839 [2024-12-09 05:28:15.161175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:33:32.839 [2024-12-09 05:28:15.161184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.161238] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:33:32.839 [2024-12-09 05:28:15.161363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:33:32.839 [2024-12-09 05:28:15.161383] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:33:32.839 [2024-12-09 05:28:15.161396] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:33:32.839 [2024-12-09 05:28:15.161414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:33:32.839 [2024-12-09 05:28:15.161426] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:33:32.839 [2024-12-09 05:28:15.161440] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:33:32.839 [2024-12-09 05:28:15.161452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:33:32.839 [2024-12-09 05:28:15.161480] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:33:32.839 [2024-12-09 05:28:15.161490] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:33:32.839 [2024-12-09 05:28:15.161503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.161524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:33:32.839 [2024-12-09 05:28:15.161538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 01:33:32.839 [2024-12-09 05:28:15.161548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.161624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.839 [2024-12-09 05:28:15.161634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:33:32.839 [2024-12-09 05:28:15.161647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 01:33:32.839 [2024-12-09 05:28:15.161657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.839 [2024-12-09 05:28:15.161752] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:33:32.839 [2024-12-09 05:28:15.161765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:33:32.839 [2024-12-09 05:28:15.161778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:32.839 [2024-12-09 05:28:15.161789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.161802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:33:32.839 [2024-12-09 05:28:15.161811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.161823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:33:32.839 [2024-12-09 05:28:15.161832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:33:32.839 [2024-12-09 05:28:15.161844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:33:32.839 [2024-12-09 05:28:15.161853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:32.839 [2024-12-09 05:28:15.161865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:33:32.839 [2024-12-09 05:28:15.161875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:33:32.839 [2024-12-09 05:28:15.161887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:32.839 [2024-12-09 05:28:15.161897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:33:32.839 [2024-12-09 05:28:15.161909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:33:32.839 [2024-12-09 05:28:15.161919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.161933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:33:32.839 [2024-12-09 05:28:15.161942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:33:32.839 [2024-12-09 05:28:15.161955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.161965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:33:32.839 [2024-12-09 05:28:15.161977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:33:32.839 [2024-12-09 05:28:15.161986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:32.839 [2024-12-09 05:28:15.161997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:33:32.839 
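A note on the "[: : integer expression expected" message earlier in this run: restore.sh line 54 evaluates '[' '' -eq 1 ']', i.e. an arithmetic test against a variable that is empty under this configuration. The test exits non-zero, the branch is skipped, and the script proceeds to the bdev_ftl_create call at line 58, so the warning is harmless here. A defensive form of such a check could look like the sketch below (the variable name is illustrative only, not necessarily the one restore.sh uses):

    # Substitute 0 for an empty/unset value so the numeric test
    # always sees an integer and never prints the warning.
    if [ "${fast_shutdown:-0}" -eq 1 ]; then
        echo 'fast shutdown path'
    fi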
[2024-12-09 05:28:15.162006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:32.839 [2024-12-09 05:28:15.162027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:33:32.839 [2024-12-09 05:28:15.162038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:32.839 [2024-12-09 05:28:15.162059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:33:32.839 [2024-12-09 05:28:15.162068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:32.839 [2024-12-09 05:28:15.162088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:33:32.839 [2024-12-09 05:28:15.162102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:32.839 [2024-12-09 05:28:15.162122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:33:32.839 [2024-12-09 05:28:15.162131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:33:32.839 [2024-12-09 05:28:15.162143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:32.839 [2024-12-09 05:28:15.162152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:33:32.839 [2024-12-09 05:28:15.162164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:33:32.839 [2024-12-09 05:28:15.162173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:33:32.839 [2024-12-09 05:28:15.162194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:33:32.839 [2024-12-09 05:28:15.162205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162213] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:33:32.839 [2024-12-09 05:28:15.162226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:33:32.839 [2024-12-09 05:28:15.162236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:32.839 [2024-12-09 05:28:15.162250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:32.839 [2024-12-09 05:28:15.162261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:33:32.839 [2024-12-09 05:28:15.162275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:33:32.840 [2024-12-09 05:28:15.162284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:33:32.840 [2024-12-09 05:28:15.162296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:33:32.840 [2024-12-09 05:28:15.162305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:33:32.840 [2024-12-09 05:28:15.162316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:33:32.840 [2024-12-09 05:28:15.162330] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:33:32.840 [2024-12-09 
05:28:15.162348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:33:32.840 [2024-12-09 05:28:15.162372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:33:32.840 [2024-12-09 05:28:15.162382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:33:32.840 [2024-12-09 05:28:15.162395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:33:32.840 [2024-12-09 05:28:15.162405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:33:32.840 [2024-12-09 05:28:15.162418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:33:32.840 [2024-12-09 05:28:15.162428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:33:32.840 [2024-12-09 05:28:15.162441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:33:32.840 [2024-12-09 05:28:15.162451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:33:32.840 [2024-12-09 05:28:15.162477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:33:32.840 [2024-12-09 05:28:15.162534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:33:32.840 [2024-12-09 05:28:15.162547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:33:32.840 [2024-12-09 05:28:15.162571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:33:32.840 [2024-12-09 05:28:15.162582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:33:32.840 [2024-12-09 05:28:15.162595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:33:32.840 [2024-12-09 05:28:15.162605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:32.840 [2024-12-09 05:28:15.162617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:33:32.840 [2024-12-09 05:28:15.162629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 01:33:32.840 [2024-12-09 05:28:15.162641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:32.840 [2024-12-09 05:28:15.162683] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 01:33:32.840 [2024-12-09 05:28:15.162711] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:33:37.026 [2024-12-09 05:28:18.845892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.845964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:33:37.026 [2024-12-09 05:28:18.845982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3689.188 ms 01:33:37.026 [2024-12-09 05:28:18.845996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.883916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.883972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:37.026 [2024-12-09 05:28:18.883989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.676 ms 01:33:37.026 [2024-12-09 05:28:18.884003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.884165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.884182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:33:37.026 [2024-12-09 05:28:18.884193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 01:33:37.026 [2024-12-09 05:28:18.884213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.930521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.930568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:37.026 [2024-12-09 05:28:18.930599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.324 ms 01:33:37.026 [2024-12-09 05:28:18.930611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.930656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.930670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:37.026 [2024-12-09 05:28:18.930682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:33:37.026 [2024-12-09 05:28:18.930705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.931205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.931230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:37.026 [2024-12-09 05:28:18.931242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 01:33:37.026 [2024-12-09 05:28:18.931254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 
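Every FTL management step in this trace is reported as a quartet: Action, name, duration, status. With the console output saved to a file, the per-step durations can be tabulated with a short pipeline (a sketch only: it assumes one log entry per line, and the file name ftl.log is illustrative):

    # Remember the most recent step name, then emit it with its duration.
    grep 'trace_step' ftl.log \
      | awk '/name:/ {sub(/.*name: /, ""); n = $0}
             /duration:/ {print $(NF-1), "ms -", n}'

For the startup above this yields lines such as "3689.188 ms - Scrub NV cache", showing the scrub of the 5 NV cache chunks to be by far the longest step.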
[2024-12-09 05:28:18.931356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.931376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:37.026 [2024-12-09 05:28:18.931388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 01:33:37.026 [2024-12-09 05:28:18.931547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.952016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.952056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:37.026 [2024-12-09 05:28:18.952069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.477 ms 01:33:37.026 [2024-12-09 05:28:18.952083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:18.977894] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:33:37.026 [2024-12-09 05:28:18.981401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:18.981426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:33:37.026 [2024-12-09 05:28:18.981441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.274 ms 01:33:37.026 [2024-12-09 05:28:18.981451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.075942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.075999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:33:37.026 [2024-12-09 05:28:19.076019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.595 ms 01:33:37.026 [2024-12-09 05:28:19.076031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.076227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.076241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:33:37.026 [2024-12-09 05:28:19.076259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 01:33:37.026 [2024-12-09 05:28:19.076269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.113805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.113842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:33:37.026 [2024-12-09 05:28:19.113860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.540 ms 01:33:37.026 [2024-12-09 05:28:19.113871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.149823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.149867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:33:37.026 [2024-12-09 05:28:19.149888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.961 ms 01:33:37.026 [2024-12-09 05:28:19.149898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.150659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.150680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:33:37.026 
[2024-12-09 05:28:19.150697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 01:33:37.026 [2024-12-09 05:28:19.150708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.251878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.251939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:33:37.026 [2024-12-09 05:28:19.251964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.258 ms 01:33:37.026 [2024-12-09 05:28:19.251975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.288652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.288687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:33:37.026 [2024-12-09 05:28:19.288704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.647 ms 01:33:37.026 [2024-12-09 05:28:19.288715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.324722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.324756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:33:37.026 [2024-12-09 05:28:19.324772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.019 ms 01:33:37.026 [2024-12-09 05:28:19.324781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.026 [2024-12-09 05:28:19.361699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.026 [2024-12-09 05:28:19.361734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:33:37.026 [2024-12-09 05:28:19.361751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.932 ms 01:33:37.027 [2024-12-09 05:28:19.361762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.027 [2024-12-09 05:28:19.361809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.027 [2024-12-09 05:28:19.361822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:33:37.027 [2024-12-09 05:28:19.361839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:33:37.027 [2024-12-09 05:28:19.361850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.027 [2024-12-09 05:28:19.361955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.027 [2024-12-09 05:28:19.361971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:33:37.027 [2024-12-09 05:28:19.361984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 01:33:37.027 [2024-12-09 05:28:19.361994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.027 [2024-12-09 05:28:19.363099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4224.209 ms, result 0 01:33:37.027 { 01:33:37.027 "name": "ftl0", 01:33:37.027 "uuid": "28d9320f-13b5-493c-9ba2-857532a9178f" 01:33:37.027 } 01:33:37.027 05:28:19 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 01:33:37.027 05:28:19 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:33:37.285 05:28:19 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 01:33:37.285 05:28:19 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:33:37.547 [2024-12-09 05:28:19.765760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.765820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:33:37.547 [2024-12-09 05:28:19.765836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:33:37.547 [2024-12-09 05:28:19.765849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.765877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:33:37.547 [2024-12-09 05:28:19.770160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.770189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:33:37.547 [2024-12-09 05:28:19.770205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.266 ms 01:33:37.547 [2024-12-09 05:28:19.770215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.770487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.770501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:33:37.547 [2024-12-09 05:28:19.770515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 01:33:37.547 [2024-12-09 05:28:19.770525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.773039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.773058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:33:37.547 [2024-12-09 05:28:19.773073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.498 ms 01:33:37.547 [2024-12-09 05:28:19.773084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.778127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.778160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:33:37.547 [2024-12-09 05:28:19.778175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms 01:33:37.547 [2024-12-09 05:28:19.778184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.816880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.816919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:33:37.547 [2024-12-09 05:28:19.816937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.685 ms 01:33:37.547 [2024-12-09 05:28:19.816947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.839308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.839344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:33:37.547 [2024-12-09 05:28:19.839361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.342 ms 01:33:37.547 [2024-12-09 05:28:19.839371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.839540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.839556] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:33:37.547 [2024-12-09 05:28:19.839570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 01:33:37.547 [2024-12-09 05:28:19.839581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.875975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.876024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:33:37.547 [2024-12-09 05:28:19.876042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.424 ms 01:33:37.547 [2024-12-09 05:28:19.876052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.912272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.912304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:33:37.547 [2024-12-09 05:28:19.912320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.231 ms 01:33:37.547 [2024-12-09 05:28:19.912330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.947582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.947615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:33:37.547 [2024-12-09 05:28:19.947630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.262 ms 01:33:37.547 [2024-12-09 05:28:19.947640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.983360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.547 [2024-12-09 05:28:19.983406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:33:37.547 [2024-12-09 05:28:19.983423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.677 ms 01:33:37.547 [2024-12-09 05:28:19.983433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.547 [2024-12-09 05:28:19.983484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:33:37.547 [2024-12-09 05:28:19.983502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983621] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:33:37.547 [2024-12-09 05:28:19.983632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 
[2024-12-09 05:28:19.983920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.983993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 01:33:37.548 [2024-12-09 05:28:19.984234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:33:37.548 [2024-12-09 05:28:19.984726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:33:37.549 [2024-12-09 05:28:19.984744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:33:37.549 [2024-12-09 05:28:19.984757] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28d9320f-13b5-493c-9ba2-857532a9178f 01:33:37.549 [2024-12-09 05:28:19.984768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:33:37.549 [2024-12-09 05:28:19.984783] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:33:37.549 [2024-12-09 05:28:19.984796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:33:37.549 [2024-12-09 05:28:19.984810] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:33:37.549 [2024-12-09 05:28:19.984819] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:33:37.549 [2024-12-09 05:28:19.984832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:33:37.549 [2024-12-09 05:28:19.984842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:33:37.549 [2024-12-09 05:28:19.984854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:33:37.549 [2024-12-09 05:28:19.984863] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 01:33:37.549 [2024-12-09 05:28:19.984875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.549 [2024-12-09 05:28:19.984886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:33:37.549 [2024-12-09 05:28:19.984899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 01:33:37.549 [2024-12-09 05:28:19.984911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.005184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.820 [2024-12-09 05:28:20.005218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:33:37.820 [2024-12-09 05:28:20.005235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.246 ms 01:33:37.820 [2024-12-09 05:28:20.005247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.005856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:37.820 [2024-12-09 05:28:20.005872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:33:37.820 [2024-12-09 05:28:20.005889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 01:33:37.820 [2024-12-09 05:28:20.005899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.071814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:37.820 [2024-12-09 05:28:20.071863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:37.820 [2024-12-09 05:28:20.071881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:37.820 [2024-12-09 05:28:20.071892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.071969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:37.820 [2024-12-09 05:28:20.071981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:37.820 [2024-12-09 05:28:20.071998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:37.820 [2024-12-09 05:28:20.072008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.072137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:37.820 [2024-12-09 05:28:20.072152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:37.820 [2024-12-09 05:28:20.072165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:37.820 [2024-12-09 05:28:20.072175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.072201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:37.820 [2024-12-09 05:28:20.072211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:37.820 [2024-12-09 05:28:20.072224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:37.820 [2024-12-09 05:28:20.072237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:37.820 [2024-12-09 05:28:20.195668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:37.820 [2024-12-09 05:28:20.195729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:37.820 [2024-12-09 05:28:20.195747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
01:33:37.820 [2024-12-09 05:28:20.195757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.093 [2024-12-09 05:28:20.296536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.296596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:38.094 [2024-12-09 05:28:20.296618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.296628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.296757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.296770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:38.094 [2024-12-09 05:28:20.296783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.296793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.296856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.296868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:38.094 [2024-12-09 05:28:20.296881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.296891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.297021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.297034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:38.094 [2024-12-09 05:28:20.297048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.297057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.297103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.297116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:33:38.094 [2024-12-09 05:28:20.297129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.297139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.297184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.297195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:38.094 [2024-12-09 05:28:20.297208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.297217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.297266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:33:38.094 [2024-12-09 05:28:20.297278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:38.094 [2024-12-09 05:28:20.297290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:33:38.094 [2024-12-09 05:28:20.297300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:38.094 [2024-12-09 05:28:20.297433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.503 ms, result 0 01:33:38.094 true 01:33:38.094 05:28:20 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79186 
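The 'FTL shutdown' management process finishes in 532.503 ms, after which killprocess tears down the target with PID 79186. The probe visible in the trace that follows (kill -0, then a ps comm lookup, which reports reactor_0 here and is compared against sudo to pick the kill strategy) is the usual shell idiom for signalling a process only if it still exists. A minimal standalone equivalent (a sketch; the hard-coded PID is for illustration, killprocess receives it as an argument):

    pid=79186   # illustrative; in the test this is the spawned target's PID
    if kill -0 "$pid" 2>/dev/null; then   # signal 0 = existence check only
        kill "$pid" && wait "$pid" 2>/dev/null || true
    fi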
01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79186 ']' 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79186 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79186 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:38.094 killing process with pid 79186 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79186' 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79186 01:33:38.094 05:28:20 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79186 01:33:43.367 05:28:25 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 01:33:47.551 262144+0 records in 01:33:47.551 262144+0 records out 01:33:47.551 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.09852 s, 262 MB/s 01:33:47.551 05:28:29 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:33:48.926 05:28:31 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:33:48.926 [2024-12-09 05:28:31.190603] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:33:48.926 [2024-12-09 05:28:31.190718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79429 ] 01:33:48.926 [2024-12-09 05:28:31.369972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:49.185 [2024-12-09 05:28:31.481720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:49.443 [2024-12-09 05:28:31.876243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:49.443 [2024-12-09 05:28:31.876322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:33:49.703 [2024-12-09 05:28:32.040627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.040687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:33:49.703 [2024-12-09 05:28:32.040702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:33:49.703 [2024-12-09 05:28:32.040728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.040777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.040795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:33:49.703 [2024-12-09 05:28:32.040806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 01:33:49.703 [2024-12-09 05:28:32.040815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.040836] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 01:33:49.703 [2024-12-09 05:28:32.041808] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:33:49.703 [2024-12-09 05:28:32.041831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.041841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:33:49.703 [2024-12-09 05:28:32.041852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 01:33:49.703 [2024-12-09 05:28:32.041862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.043344] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:33:49.703 [2024-12-09 05:28:32.062898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.063072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:33:49.703 [2024-12-09 05:28:32.063095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.587 ms 01:33:49.703 [2024-12-09 05:28:32.063107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.063170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.063182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:33:49.703 [2024-12-09 05:28:32.063193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 01:33:49.703 [2024-12-09 05:28:32.063203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.070120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.070263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:33:49.703 [2024-12-09 05:28:32.070282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.856 ms 01:33:49.703 [2024-12-09 05:28:32.070298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.070381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.070394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:33:49.703 [2024-12-09 05:28:32.070405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 01:33:49.703 [2024-12-09 05:28:32.070415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.703 [2024-12-09 05:28:32.070457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.703 [2024-12-09 05:28:32.070489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:33:49.703 [2024-12-09 05:28:32.070500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:33:49.704 [2024-12-09 05:28:32.070509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.704 [2024-12-09 05:28:32.070538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:33:49.704 [2024-12-09 05:28:32.075311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.704 [2024-12-09 05:28:32.075343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:33:49.704 [2024-12-09 05:28:32.075359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 01:33:49.704 [2024-12-09 05:28:32.075370] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.704 [2024-12-09 05:28:32.075401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.704 [2024-12-09 05:28:32.075412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:33:49.704 [2024-12-09 05:28:32.075423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:33:49.704 [2024-12-09 05:28:32.075432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.704 [2024-12-09 05:28:32.075498] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:33:49.704 [2024-12-09 05:28:32.075523] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:33:49.704 [2024-12-09 05:28:32.075558] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:33:49.704 [2024-12-09 05:28:32.075579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:33:49.704 [2024-12-09 05:28:32.075669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:33:49.704 [2024-12-09 05:28:32.075682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:33:49.704 [2024-12-09 05:28:32.075696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:33:49.704 [2024-12-09 05:28:32.075708] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:33:49.704 [2024-12-09 05:28:32.075720] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:33:49.704 [2024-12-09 05:28:32.075731] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:33:49.704 [2024-12-09 05:28:32.075741] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:33:49.704 [2024-12-09 05:28:32.075754] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:33:49.704 [2024-12-09 05:28:32.075764] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:33:49.704 [2024-12-09 05:28:32.075775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.704 [2024-12-09 05:28:32.075785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:33:49.704 [2024-12-09 05:28:32.075795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 01:33:49.704 [2024-12-09 05:28:32.075805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.704 [2024-12-09 05:28:32.075881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.704 [2024-12-09 05:28:32.075892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:33:49.704 [2024-12-09 05:28:32.075901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 01:33:49.704 [2024-12-09 05:28:32.075912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.704 [2024-12-09 05:28:32.076006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:33:49.704 [2024-12-09 05:28:32.076021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:33:49.704 [2024-12-09 05:28:32.076032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 01:33:49.704 [2024-12-09 05:28:32.076042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:33:49.704 [2024-12-09 05:28:32.076062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:33:49.704 [2024-12-09 05:28:32.076090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:49.704 [2024-12-09 05:28:32.076111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:33:49.704 [2024-12-09 05:28:32.076120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:33:49.704 [2024-12-09 05:28:32.076128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:33:49.704 [2024-12-09 05:28:32.076147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:33:49.704 [2024-12-09 05:28:32.076156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:33:49.704 [2024-12-09 05:28:32.076166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:33:49.704 [2024-12-09 05:28:32.076183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:33:49.704 [2024-12-09 05:28:32.076210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:33:49.704 [2024-12-09 05:28:32.076238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:33:49.704 [2024-12-09 05:28:32.076265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:33:49.704 [2024-12-09 05:28:32.076292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:33:49.704 [2024-12-09 05:28:32.076319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:49.704 [2024-12-09 05:28:32.076336] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 01:33:49.704 [2024-12-09 05:28:32.076345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:33:49.704 [2024-12-09 05:28:32.076354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:33:49.704 [2024-12-09 05:28:32.076364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:33:49.704 [2024-12-09 05:28:32.076372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:33:49.704 [2024-12-09 05:28:32.076381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:33:49.704 [2024-12-09 05:28:32.076400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:33:49.704 [2024-12-09 05:28:32.076410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076418] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:33:49.704 [2024-12-09 05:28:32.076428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:33:49.704 [2024-12-09 05:28:32.076437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:33:49.704 [2024-12-09 05:28:32.076479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:33:49.704 [2024-12-09 05:28:32.076488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:33:49.704 [2024-12-09 05:28:32.076498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:33:49.704 [2024-12-09 05:28:32.076507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:33:49.704 [2024-12-09 05:28:32.076516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:33:49.704 [2024-12-09 05:28:32.076525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:33:49.704 [2024-12-09 05:28:32.076536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:33:49.704 [2024-12-09 05:28:32.076548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:49.704 [2024-12-09 05:28:32.076563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:33:49.704 [2024-12-09 05:28:32.076574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:33:49.704 [2024-12-09 05:28:32.076585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:33:49.704 [2024-12-09 05:28:32.076595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:33:49.704 [2024-12-09 05:28:32.076605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:33:49.704 [2024-12-09 05:28:32.076615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:33:49.704 [2024-12-09 05:28:32.076626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:33:49.704 [2024-12-09 05:28:32.076636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:33:49.704 [2024-12-09 05:28:32.076646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:33:49.704 [2024-12-09 05:28:32.076656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:33:49.704 [2024-12-09 05:28:32.076666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:33:49.704 [2024-12-09 05:28:32.076676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:33:49.704 [2024-12-09 05:28:32.076686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:33:49.705 [2024-12-09 05:28:32.076697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:33:49.705 [2024-12-09 05:28:32.076707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:33:49.705 [2024-12-09 05:28:32.076718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:33:49.705 [2024-12-09 05:28:32.076729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:33:49.705 [2024-12-09 05:28:32.076739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:33:49.705 [2024-12-09 05:28:32.076749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:33:49.705 [2024-12-09 05:28:32.076759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:33:49.705 [2024-12-09 05:28:32.076770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.705 [2024-12-09 05:28:32.076780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:33:49.705 [2024-12-09 05:28:32.076790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 01:33:49.705 [2024-12-09 05:28:32.076799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.705 [2024-12-09 05:28:32.117927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.705 [2024-12-09 05:28:32.117965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:33:49.705 [2024-12-09 05:28:32.117979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.149 ms 01:33:49.705 [2024-12-09 05:28:32.117993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.705 [2024-12-09 05:28:32.118074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.705 [2024-12-09 05:28:32.118086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:33:49.705 [2024-12-09 05:28:32.118096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.053 ms 01:33:49.705 [2024-12-09 05:28:32.118106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.964 [2024-12-09 05:28:32.178030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.964 [2024-12-09 05:28:32.178079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:33:49.964 [2024-12-09 05:28:32.178095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.955 ms 01:33:49.964 [2024-12-09 05:28:32.178105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.964 [2024-12-09 05:28:32.178153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.964 [2024-12-09 05:28:32.178165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:33:49.964 [2024-12-09 05:28:32.178181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:33:49.964 [2024-12-09 05:28:32.178191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.964 [2024-12-09 05:28:32.178712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.178727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:33:49.965 [2024-12-09 05:28:32.178739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 01:33:49.965 [2024-12-09 05:28:32.178749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.178871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.178886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:33:49.965 [2024-12-09 05:28:32.178903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 01:33:49.965 [2024-12-09 05:28:32.178913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.200284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.200330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:33:49.965 [2024-12-09 05:28:32.200345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.383 ms 01:33:49.965 [2024-12-09 05:28:32.200355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.225943] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 01:33:49.965 [2024-12-09 05:28:32.225986] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:33:49.965 [2024-12-09 05:28:32.226002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.226014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:33:49.965 [2024-12-09 05:28:32.226025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.550 ms 01:33:49.965 [2024-12-09 05:28:32.226035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.256181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.256344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:33:49.965 [2024-12-09 05:28:32.256367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.147 ms 01:33:49.965 [2024-12-09 05:28:32.256378] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.275246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.275380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:33:49.965 [2024-12-09 05:28:32.275400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.785 ms 01:33:49.965 [2024-12-09 05:28:32.275411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.294103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.294243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:33:49.965 [2024-12-09 05:28:32.294263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.684 ms 01:33:49.965 [2024-12-09 05:28:32.294273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.295180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.295207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:33:49.965 [2024-12-09 05:28:32.295219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 01:33:49.965 [2024-12-09 05:28:32.295236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.382689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.382747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:33:49.965 [2024-12-09 05:28:32.382764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.572 ms 01:33:49.965 [2024-12-09 05:28:32.382781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.394099] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:33:49.965 [2024-12-09 05:28:32.397281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.397428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:33:49.965 [2024-12-09 05:28:32.397454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.467 ms 01:33:49.965 [2024-12-09 05:28:32.397478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.397606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.397620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:33:49.965 [2024-12-09 05:28:32.397631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:33:49.965 [2024-12-09 05:28:32.397641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.397733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.397746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:33:49.965 [2024-12-09 05:28:32.397757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:33:49.965 [2024-12-09 05:28:32.397767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.397788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.397799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 01:33:49.965 [2024-12-09 05:28:32.397809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:33:49.965 [2024-12-09 05:28:32.397818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:49.965 [2024-12-09 05:28:32.397853] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:33:49.965 [2024-12-09 05:28:32.397869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:49.965 [2024-12-09 05:28:32.397879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:33:49.965 [2024-12-09 05:28:32.397889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 01:33:49.965 [2024-12-09 05:28:32.397899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:50.224 [2024-12-09 05:28:32.435478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:50.224 [2024-12-09 05:28:32.435524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:33:50.224 [2024-12-09 05:28:32.435540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.620 ms 01:33:50.224 [2024-12-09 05:28:32.435557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:50.224 [2024-12-09 05:28:32.435639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:33:50.224 [2024-12-09 05:28:32.435652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:33:50.224 [2024-12-09 05:28:32.435663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 01:33:50.224 [2024-12-09 05:28:32.435673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:33:50.224 [2024-12-09 05:28:32.436839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.355 ms, result 0 01:33:51.159  [2024-12-09T05:28:34.552Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-09T05:28:35.488Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-09T05:28:36.465Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-09T05:28:37.842Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-09T05:28:38.778Z] Copying: 116/1024 [MB] (23 MBps) [2024-12-09T05:28:39.712Z] Copying: 139/1024 [MB] (23 MBps) [2024-12-09T05:28:40.649Z] Copying: 163/1024 [MB] (23 MBps) [2024-12-09T05:28:41.586Z] Copying: 186/1024 [MB] (22 MBps) [2024-12-09T05:28:42.524Z] Copying: 208/1024 [MB] (22 MBps) [2024-12-09T05:28:43.461Z] Copying: 231/1024 [MB] (22 MBps) [2024-12-09T05:28:44.841Z] Copying: 253/1024 [MB] (22 MBps) [2024-12-09T05:28:45.779Z] Copying: 275/1024 [MB] (21 MBps) [2024-12-09T05:28:46.716Z] Copying: 298/1024 [MB] (22 MBps) [2024-12-09T05:28:47.653Z] Copying: 320/1024 [MB] (21 MBps) [2024-12-09T05:28:48.589Z] Copying: 341/1024 [MB] (21 MBps) [2024-12-09T05:28:49.525Z] Copying: 367/1024 [MB] (26 MBps) [2024-12-09T05:28:50.528Z] Copying: 393/1024 [MB] (26 MBps) [2024-12-09T05:28:51.485Z] Copying: 420/1024 [MB] (26 MBps) [2024-12-09T05:28:52.419Z] Copying: 446/1024 [MB] (25 MBps) [2024-12-09T05:28:53.795Z] Copying: 472/1024 [MB] (26 MBps) [2024-12-09T05:28:54.732Z] Copying: 500/1024 [MB] (27 MBps) [2024-12-09T05:28:55.695Z] Copying: 528/1024 [MB] (27 MBps) [2024-12-09T05:28:56.629Z] Copying: 555/1024 [MB] (26 MBps) [2024-12-09T05:28:57.562Z] Copying: 581/1024 [MB] (26 MBps) [2024-12-09T05:28:58.533Z] Copying: 607/1024 [MB] (26 MBps) [2024-12-09T05:28:59.467Z] Copying: 630/1024 [MB] (22 MBps) [2024-12-09T05:29:00.843Z] Copying: 651/1024 [MB] (21 
MBps) [2024-12-09T05:29:01.411Z] Copying: 673/1024 [MB] (21 MBps) [2024-12-09T05:29:02.790Z] Copying: 694/1024 [MB] (21 MBps) [2024-12-09T05:29:03.725Z] Copying: 715/1024 [MB] (21 MBps) [2024-12-09T05:29:04.660Z] Copying: 739/1024 [MB] (23 MBps) [2024-12-09T05:29:05.596Z] Copying: 761/1024 [MB] (22 MBps) [2024-12-09T05:29:06.544Z] Copying: 784/1024 [MB] (22 MBps) [2024-12-09T05:29:07.480Z] Copying: 807/1024 [MB] (22 MBps) [2024-12-09T05:29:08.414Z] Copying: 829/1024 [MB] (22 MBps) [2024-12-09T05:29:09.789Z] Copying: 852/1024 [MB] (22 MBps) [2024-12-09T05:29:10.725Z] Copying: 874/1024 [MB] (22 MBps) [2024-12-09T05:29:11.708Z] Copying: 897/1024 [MB] (22 MBps) [2024-12-09T05:29:12.652Z] Copying: 919/1024 [MB] (22 MBps) [2024-12-09T05:29:13.590Z] Copying: 942/1024 [MB] (22 MBps) [2024-12-09T05:29:14.527Z] Copying: 963/1024 [MB] (21 MBps) [2024-12-09T05:29:15.462Z] Copying: 984/1024 [MB] (21 MBps) [2024-12-09T05:29:16.433Z] Copying: 1006/1024 [MB] (21 MBps) [2024-12-09T05:29:16.433Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-09 05:29:16.176862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.176932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:34:33.977 [2024-12-09 05:29:16.176952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:34:33.977 [2024-12-09 05:29:16.176964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.176989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:34:33.977 [2024-12-09 05:29:16.181532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.181574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:34:33.977 [2024-12-09 05:29:16.181597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 01:34:33.977 [2024-12-09 05:29:16.181609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.183682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.183866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:34:33.977 [2024-12-09 05:29:16.183893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 01:34:33.977 [2024-12-09 05:29:16.183906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.201206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.201253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:34:33.977 [2024-12-09 05:29:16.201269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 01:34:33.977 [2024-12-09 05:29:16.201281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.206070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.206238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:34:33.977 [2024-12-09 05:29:16.206262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 01:34:33.977 [2024-12-09 05:29:16.206273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.242111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 
05:29:16.242155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:34:33.977 [2024-12-09 05:29:16.242170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.821 ms 01:34:33.977 [2024-12-09 05:29:16.242182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.263126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.263178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:34:33.977 [2024-12-09 05:29:16.263193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.936 ms 01:34:33.977 [2024-12-09 05:29:16.263204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.263334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.263355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:34:33.977 [2024-12-09 05:29:16.263367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 01:34:33.977 [2024-12-09 05:29:16.263378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.298831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.298873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:34:33.977 [2024-12-09 05:29:16.298887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.492 ms 01:34:33.977 [2024-12-09 05:29:16.298899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.332412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.332454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:34:33.977 [2024-12-09 05:29:16.332479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.526 ms 01:34:33.977 [2024-12-09 05:29:16.332489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.366360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.366406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:34:33.977 [2024-12-09 05:29:16.366420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.885 ms 01:34:33.977 [2024-12-09 05:29:16.366431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.400536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.977 [2024-12-09 05:29:16.400721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:34:33.977 [2024-12-09 05:29:16.400746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.061 ms 01:34:33.977 [2024-12-09 05:29:16.400757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.977 [2024-12-09 05:29:16.400798] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:34:33.977 [2024-12-09 05:29:16.400816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 3: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.400989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.401000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.401013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.401024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.401037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.401050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:34:33.977 [2024-12-09 05:29:16.401062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401161] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401477] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 
05:29:16.401802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.401998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:34:33.978 [2024-12-09 05:29:16.402089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:34:33.978 [2024-12-09 05:29:16.402107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28d9320f-13b5-493c-9ba2-857532a9178f 01:34:33.978 [2024-12-09 
05:29:16.402119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:34:33.978 [2024-12-09 05:29:16.402130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:34:33.978 [2024-12-09 05:29:16.402141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:34:33.978 [2024-12-09 05:29:16.402153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:34:33.978 [2024-12-09 05:29:16.402164] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:34:33.978 [2024-12-09 05:29:16.402190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:34:33.978 [2024-12-09 05:29:16.402201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:34:33.978 [2024-12-09 05:29:16.402211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:34:33.978 [2024-12-09 05:29:16.402222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:34:33.978 [2024-12-09 05:29:16.402233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.979 [2024-12-09 05:29:16.402244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:34:33.979 [2024-12-09 05:29:16.402257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 01:34:33.979 [2024-12-09 05:29:16.402270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.979 [2024-12-09 05:29:16.422516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.979 [2024-12-09 05:29:16.422556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:34:33.979 [2024-12-09 05:29:16.422570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.231 ms 01:34:33.979 [2024-12-09 05:29:16.422598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:33.979 [2024-12-09 05:29:16.423214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:33.979 [2024-12-09 05:29:16.423231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:34:33.979 [2024-12-09 05:29:16.423244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 01:34:33.979 [2024-12-09 05:29:16.423264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.242 [2024-12-09 05:29:16.478136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.242 [2024-12-09 05:29:16.478182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:34:34.242 [2024-12-09 05:29:16.478198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.242 [2024-12-09 05:29:16.478228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.242 [2024-12-09 05:29:16.478294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.242 [2024-12-09 05:29:16.478308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:34:34.242 [2024-12-09 05:29:16.478320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.242 [2024-12-09 05:29:16.478340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.242 [2024-12-09 05:29:16.478448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.242 [2024-12-09 05:29:16.478464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:34:34.242 [2024-12-09 05:29:16.478493] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.242 [2024-12-09 05:29:16.478506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.242 [2024-12-09 05:29:16.478528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.242 [2024-12-09 05:29:16.478542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:34:34.242 [2024-12-09 05:29:16.478555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.242 [2024-12-09 05:29:16.478568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.242 [2024-12-09 05:29:16.606181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.242 [2024-12-09 05:29:16.606259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:34:34.242 [2024-12-09 05:29:16.606281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.242 [2024-12-09 05:29:16.606294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.500 [2024-12-09 05:29:16.704783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.500 [2024-12-09 05:29:16.704853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:34:34.500 [2024-12-09 05:29:16.704872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.500 [2024-12-09 05:29:16.704893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.500 [2024-12-09 05:29:16.705015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.500 [2024-12-09 05:29:16.705029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:34:34.501 [2024-12-09 05:29:16.705042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.501 [2024-12-09 05:29:16.705054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.501 [2024-12-09 05:29:16.705100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.501 [2024-12-09 05:29:16.705113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:34:34.501 [2024-12-09 05:29:16.705126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.501 [2024-12-09 05:29:16.705137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.501 [2024-12-09 05:29:16.705282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.501 [2024-12-09 05:29:16.705298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:34:34.501 [2024-12-09 05:29:16.705311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.501 [2024-12-09 05:29:16.705323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.501 [2024-12-09 05:29:16.705370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.501 [2024-12-09 05:29:16.705384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:34:34.501 [2024-12-09 05:29:16.705396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.501 [2024-12-09 05:29:16.705407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.501 [2024-12-09 05:29:16.705458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.501 [2024-12-09 05:29:16.705500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
01:34:34.501 [2024-12-09 05:29:16.705512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.501 [2024-12-09 05:29:16.705524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.501 [2024-12-09 05:29:16.705583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:34:34.501 [2024-12-09 05:29:16.705597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:34:34.501 [2024-12-09 05:29:16.705609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:34:34.501 [2024-12-09 05:29:16.705621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:34.501 [2024-12-09 05:29:16.705799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.744 ms, result 0 01:34:35.877 01:34:35.877 01:34:35.877 05:29:17 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 01:34:35.877 [2024-12-09 05:29:18.030366] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:34:35.878 [2024-12-09 05:29:18.030521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79908 ] 01:34:35.878 [2024-12-09 05:29:18.217334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:36.137 [2024-12-09 05:29:18.347314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:36.396 [2024-12-09 05:29:18.766556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:34:36.396 [2024-12-09 05:29:18.766648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:34:36.657 [2024-12-09 05:29:18.932914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.933196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:34:36.657 [2024-12-09 05:29:18.933227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:34:36.657 [2024-12-09 05:29:18.933241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.933313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.933333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:34:36.657 [2024-12-09 05:29:18.933346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:34:36.657 [2024-12-09 05:29:18.933358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.933389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:34:36.657 [2024-12-09 05:29:18.934503] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:34:36.657 [2024-12-09 05:29:18.934542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.934555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:34:36.657 [2024-12-09 05:29:18.934570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 
01:34:36.657 [2024-12-09 05:29:18.934582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.937055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:34:36.657 [2024-12-09 05:29:18.956100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.956283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:34:36.657 [2024-12-09 05:29:18.956310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.077 ms 01:34:36.657 [2024-12-09 05:29:18.956323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.956397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.956412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:34:36.657 [2024-12-09 05:29:18.956426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 01:34:36.657 [2024-12-09 05:29:18.956438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.968674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.968710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:34:36.657 [2024-12-09 05:29:18.968725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.154 ms 01:34:36.657 [2024-12-09 05:29:18.968742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.968831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.968847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:34:36.657 [2024-12-09 05:29:18.968860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 01:34:36.657 [2024-12-09 05:29:18.968872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.968933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.968947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:34:36.657 [2024-12-09 05:29:18.968959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:34:36.657 [2024-12-09 05:29:18.968970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.969005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:34:36.657 [2024-12-09 05:29:18.974679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.974850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:34:36.657 [2024-12-09 05:29:18.974880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.691 ms 01:34:36.657 [2024-12-09 05:29:18.974893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.974933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.974946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:34:36.657 [2024-12-09 05:29:18.974959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:34:36.657 [2024-12-09 05:29:18.974971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 
[2024-12-09 05:29:18.975024] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:34:36.657 [2024-12-09 05:29:18.975055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:34:36.657 [2024-12-09 05:29:18.975095] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:34:36.657 [2024-12-09 05:29:18.975120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:34:36.657 [2024-12-09 05:29:18.975214] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:34:36.657 [2024-12-09 05:29:18.975230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:34:36.657 [2024-12-09 05:29:18.975247] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:34:36.657 [2024-12-09 05:29:18.975262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:34:36.657 [2024-12-09 05:29:18.975276] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:34:36.657 [2024-12-09 05:29:18.975290] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:34:36.657 [2024-12-09 05:29:18.975302] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:34:36.657 [2024-12-09 05:29:18.975319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:34:36.657 [2024-12-09 05:29:18.975331] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:34:36.657 [2024-12-09 05:29:18.975344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.975356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:34:36.657 [2024-12-09 05:29:18.975369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 01:34:36.657 [2024-12-09 05:29:18.975380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.975478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.657 [2024-12-09 05:29:18.975494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:34:36.657 [2024-12-09 05:29:18.975506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 01:34:36.657 [2024-12-09 05:29:18.975518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.657 [2024-12-09 05:29:18.975624] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:34:36.657 [2024-12-09 05:29:18.975641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:34:36.657 [2024-12-09 05:29:18.975654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:34:36.657 [2024-12-09 05:29:18.975667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:34:36.657 [2024-12-09 05:29:18.975679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:34:36.657 [2024-12-09 05:29:18.975691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:34:36.657 [2024-12-09 05:29:18.975703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:34:36.657 [2024-12-09 05:29:18.975714] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 01:34:36.657 [2024-12-09 05:29:18.975728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:34:36.658 [2024-12-09 05:29:18.975751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:34:36.658 [2024-12-09 05:29:18.975762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:34:36.658 [2024-12-09 05:29:18.975776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:34:36.658 [2024-12-09 05:29:18.975800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:34:36.658 [2024-12-09 05:29:18.975811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:34:36.658 [2024-12-09 05:29:18.975823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:34:36.658 [2024-12-09 05:29:18.975845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:34:36.658 [2024-12-09 05:29:18.975856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:34:36.658 [2024-12-09 05:29:18.975879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:34:36.658 [2024-12-09 05:29:18.975903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:34:36.658 [2024-12-09 05:29:18.975914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:34:36.658 [2024-12-09 05:29:18.975937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:34:36.658 [2024-12-09 05:29:18.975948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:34:36.658 [2024-12-09 05:29:18.975970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:34:36.658 [2024-12-09 05:29:18.975981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:34:36.658 [2024-12-09 05:29:18.975992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:34:36.658 [2024-12-09 05:29:18.976002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:34:36.658 [2024-12-09 05:29:18.976013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:34:36.658 [2024-12-09 05:29:18.976024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:34:36.658 [2024-12-09 05:29:18.976034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:34:36.658 [2024-12-09 05:29:18.976045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:34:36.658 [2024-12-09 05:29:18.976056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:34:36.658 [2024-12-09 05:29:18.976066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:34:36.658 [2024-12-09 05:29:18.976077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:34:36.658 [2024-12-09 
05:29:18.976087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:34:36.658 [2024-12-09 05:29:18.976098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:34:36.658 [2024-12-09 05:29:18.976109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:34:36.658 [2024-12-09 05:29:18.976120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:34:36.658 [2024-12-09 05:29:18.976132] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:34:36.658 [2024-12-09 05:29:18.976145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:34:36.658 [2024-12-09 05:29:18.976157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:34:36.658 [2024-12-09 05:29:18.976169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:34:36.658 [2024-12-09 05:29:18.976181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:34:36.658 [2024-12-09 05:29:18.976192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:34:36.658 [2024-12-09 05:29:18.976203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:34:36.658 [2024-12-09 05:29:18.976214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:34:36.658 [2024-12-09 05:29:18.976225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:34:36.658 [2024-12-09 05:29:18.976248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:34:36.658 [2024-12-09 05:29:18.976261] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:34:36.658 [2024-12-09 05:29:18.976274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:34:36.658 [2024-12-09 05:29:18.976304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:34:36.658 [2024-12-09 05:29:18.976316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:34:36.658 [2024-12-09 05:29:18.976328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:34:36.658 [2024-12-09 05:29:18.976339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:34:36.658 [2024-12-09 05:29:18.976350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:34:36.658 [2024-12-09 05:29:18.976360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:34:36.658 [2024-12-09 05:29:18.976372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:34:36.658 [2024-12-09 05:29:18.976383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:34:36.658 [2024-12-09 05:29:18.976394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:34:36.658 [2024-12-09 05:29:18.976450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:34:36.658 [2024-12-09 05:29:18.976463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:34:36.658 [2024-12-09 05:29:18.976502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:34:36.658 [2024-12-09 05:29:18.976515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:34:36.658 [2024-12-09 05:29:18.976526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:34:36.658 [2024-12-09 05:29:18.976541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:18.976555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:34:36.658 [2024-12-09 05:29:18.976567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 01:34:36.658 [2024-12-09 05:29:18.976578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.022258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.022481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:34:36.658 [2024-12-09 05:29:19.022508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.701 ms 01:34:36.658 [2024-12-09 05:29:19.022530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.022614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.022628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:34:36.658 [2024-12-09 05:29:19.022642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 01:34:36.658 [2024-12-09 05:29:19.022654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.082932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.082980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:34:36.658 [2024-12-09 05:29:19.083002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.307 ms 01:34:36.658 [2024-12-09 05:29:19.083015] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.083056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.083070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:34:36.658 [2024-12-09 05:29:19.083090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:34:36.658 [2024-12-09 05:29:19.083101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.084008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.084034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:34:36.658 [2024-12-09 05:29:19.084048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 01:34:36.658 [2024-12-09 05:29:19.084061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.084201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.084218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:34:36.658 [2024-12-09 05:29:19.084241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 01:34:36.658 [2024-12-09 05:29:19.084253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.658 [2024-12-09 05:29:19.107667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.658 [2024-12-09 05:29:19.107711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:34:36.658 [2024-12-09 05:29:19.107726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.424 ms 01:34:36.658 [2024-12-09 05:29:19.107739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.918 [2024-12-09 05:29:19.127603] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:34:36.918 [2024-12-09 05:29:19.127648] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:34:36.918 [2024-12-09 05:29:19.127665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.918 [2024-12-09 05:29:19.127678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:34:36.918 [2024-12-09 05:29:19.127691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.839 ms 01:34:36.918 [2024-12-09 05:29:19.127703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.918 [2024-12-09 05:29:19.156278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.918 [2024-12-09 05:29:19.156345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:34:36.918 [2024-12-09 05:29:19.156362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.573 ms 01:34:36.918 [2024-12-09 05:29:19.156375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.918 [2024-12-09 05:29:19.174124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.174168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:34:36.919 [2024-12-09 05:29:19.174183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.716 ms 01:34:36.919 [2024-12-09 05:29:19.174211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
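(Annotation, not part of the captured output: every step of the FTL management pipeline above is traced as a fixed quadruple from mngt/ftl_mngt.c — an "Action" marker (line 427), the step "name:" (428), its "duration:" (430), and its "status:" (431), where status 0 means the step succeeded; when the whole sequence completes, finish_msg (line 459) prints the total, e.g. the "FTL startup" result further down. A minimal sketch for turning such a log into a per-step timing table, assuming the console output was saved one record per line to build.log — a hypothetical filename, not something this job produces:

    # Pair each trace_step "name:" with the "duration:" that follows it.
    # -F': ' splits on colon-space, so $NF is the text after the last label.
    awk -F': ' '/428:trace_step/ { name = $NF }
                /430:trace_step/ { printf "%10s  %s\n", $NF, name }' build.log

This is only a reading aid for the records above; the SPDK test scripts themselves do no such post-processing.)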
01:34:36.919 [2024-12-09 05:29:19.191107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.191150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:34:36.919 [2024-12-09 05:29:19.191166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.879 ms 01:34:36.919 [2024-12-09 05:29:19.191177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.191981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.192021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:34:36.919 [2024-12-09 05:29:19.192041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 01:34:36.919 [2024-12-09 05:29:19.192052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.287592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.287679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:34:36.919 [2024-12-09 05:29:19.287707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.665 ms 01:34:36.919 [2024-12-09 05:29:19.287720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.297833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:34:36.919 [2024-12-09 05:29:19.301389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.301428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:34:36.919 [2024-12-09 05:29:19.301444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.639 ms 01:34:36.919 [2024-12-09 05:29:19.301485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.301569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.301586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:34:36.919 [2024-12-09 05:29:19.301605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:34:36.919 [2024-12-09 05:29:19.301617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.301769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.301786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:34:36.919 [2024-12-09 05:29:19.301799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 01:34:36.919 [2024-12-09 05:29:19.301812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.301845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.301859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:34:36.919 [2024-12-09 05:29:19.301872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:34:36.919 [2024-12-09 05:29:19.301885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.301935] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:34:36.919 [2024-12-09 05:29:19.301951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 
[2024-12-09 05:29:19.301964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:34:36.919 [2024-12-09 05:29:19.301977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 01:34:36.919 [2024-12-09 05:29:19.301990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.337013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.337062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:34:36.919 [2024-12-09 05:29:19.337085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.054 ms 01:34:36.919 [2024-12-09 05:29:19.337098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.337183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:34:36.919 [2024-12-09 05:29:19.337197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:34:36.919 [2024-12-09 05:29:19.337210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:34:36.919 [2024-12-09 05:29:19.337222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:34:36.919 [2024-12-09 05:29:19.338786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.931 ms, result 0 01:34:38.295  [2024-12-09T05:29:21.686Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-09T05:29:22.621Z] Copying: 49/1024 [MB] (24 MBps) [2024-12-09T05:29:23.557Z] Copying: 73/1024 [MB] (23 MBps) [2024-12-09T05:29:24.936Z] Copying: 96/1024 [MB] (23 MBps) [2024-12-09T05:29:25.881Z] Copying: 120/1024 [MB] (24 MBps) [2024-12-09T05:29:26.847Z] Copying: 144/1024 [MB] (24 MBps) [2024-12-09T05:29:27.782Z] Copying: 169/1024 [MB] (24 MBps) [2024-12-09T05:29:28.717Z] Copying: 193/1024 [MB] (24 MBps) [2024-12-09T05:29:29.648Z] Copying: 218/1024 [MB] (24 MBps) [2024-12-09T05:29:30.583Z] Copying: 242/1024 [MB] (23 MBps) [2024-12-09T05:29:31.534Z] Copying: 265/1024 [MB] (23 MBps) [2024-12-09T05:29:32.911Z] Copying: 289/1024 [MB] (23 MBps) [2024-12-09T05:29:33.847Z] Copying: 312/1024 [MB] (23 MBps) [2024-12-09T05:29:34.782Z] Copying: 336/1024 [MB] (23 MBps) [2024-12-09T05:29:35.716Z] Copying: 360/1024 [MB] (23 MBps) [2024-12-09T05:29:36.650Z] Copying: 384/1024 [MB] (23 MBps) [2024-12-09T05:29:37.585Z] Copying: 408/1024 [MB] (24 MBps) [2024-12-09T05:29:38.522Z] Copying: 433/1024 [MB] (24 MBps) [2024-12-09T05:29:39.899Z] Copying: 457/1024 [MB] (24 MBps) [2024-12-09T05:29:40.836Z] Copying: 481/1024 [MB] (24 MBps) [2024-12-09T05:29:41.772Z] Copying: 507/1024 [MB] (25 MBps) [2024-12-09T05:29:42.727Z] Copying: 532/1024 [MB] (24 MBps) [2024-12-09T05:29:43.664Z] Copying: 556/1024 [MB] (24 MBps) [2024-12-09T05:29:44.599Z] Copying: 581/1024 [MB] (24 MBps) [2024-12-09T05:29:45.536Z] Copying: 606/1024 [MB] (24 MBps) [2024-12-09T05:29:46.912Z] Copying: 631/1024 [MB] (24 MBps) [2024-12-09T05:29:47.850Z] Copying: 656/1024 [MB] (25 MBps) [2024-12-09T05:29:48.787Z] Copying: 680/1024 [MB] (23 MBps) [2024-12-09T05:29:49.724Z] Copying: 704/1024 [MB] (24 MBps) [2024-12-09T05:29:50.660Z] Copying: 728/1024 [MB] (23 MBps) [2024-12-09T05:29:51.594Z] Copying: 752/1024 [MB] (23 MBps) [2024-12-09T05:29:52.524Z] Copying: 776/1024 [MB] (24 MBps) [2024-12-09T05:29:53.896Z] Copying: 800/1024 [MB] (24 MBps) [2024-12-09T05:29:54.831Z] Copying: 825/1024 [MB] (24 MBps) [2024-12-09T05:29:55.764Z] Copying: 849/1024 [MB] (24 MBps) [2024-12-09T05:29:56.707Z] 
Copying: 873/1024 [MB] (24 MBps) [2024-12-09T05:29:57.645Z] Copying: 897/1024 [MB] (24 MBps) [2024-12-09T05:29:58.582Z] Copying: 921/1024 [MB] (24 MBps) [2024-12-09T05:29:59.518Z] Copying: 946/1024 [MB] (24 MBps) [2024-12-09T05:30:00.896Z] Copying: 970/1024 [MB] (24 MBps) [2024-12-09T05:30:01.834Z] Copying: 994/1024 [MB] (24 MBps) [2024-12-09T05:30:01.834Z] Copying: 1021/1024 [MB] (26 MBps) [2024-12-09T05:30:01.834Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 05:30:01.578206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.578282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:35:19.378 [2024-12-09 05:30:01.578302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:35:19.378 [2024-12-09 05:30:01.578314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.578339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:35:19.378 [2024-12-09 05:30:01.583236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.583282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:35:19.378 [2024-12-09 05:30:01.583296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 01:35:19.378 [2024-12-09 05:30:01.583308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.583534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.583550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:35:19.378 [2024-12-09 05:30:01.583563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 01:35:19.378 [2024-12-09 05:30:01.583576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.586064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.586245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:35:19.378 [2024-12-09 05:30:01.586266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 01:35:19.378 [2024-12-09 05:30:01.586285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.590884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.590919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:35:19.378 [2024-12-09 05:30:01.590934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 01:35:19.378 [2024-12-09 05:30:01.590946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.627284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.627327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:35:19.378 [2024-12-09 05:30:01.627343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.306 ms 01:35:19.378 [2024-12-09 05:30:01.627355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.648673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.648842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:35:19.378 [2024-12-09 
05:30:01.648865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.311 ms 01:35:19.378 [2024-12-09 05:30:01.648877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.649002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.649017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:35:19.378 [2024-12-09 05:30:01.649030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 01:35:19.378 [2024-12-09 05:30:01.649041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.685010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.685052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:35:19.378 [2024-12-09 05:30:01.685067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.009 ms 01:35:19.378 [2024-12-09 05:30:01.685078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.720143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.720327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:35:19.378 [2024-12-09 05:30:01.720350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.080 ms 01:35:19.378 [2024-12-09 05:30:01.720362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.754847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.755022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:35:19.378 [2024-12-09 05:30:01.755062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.501 ms 01:35:19.378 [2024-12-09 05:30:01.755074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.378 [2024-12-09 05:30:01.789891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.378 [2024-12-09 05:30:01.789934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:35:19.378 [2024-12-09 05:30:01.789948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.759 ms 01:35:19.379 [2024-12-09 05:30:01.789959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.379 [2024-12-09 05:30:01.790000] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:35:19.379 [2024-12-09 05:30:01.790027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 
wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 32: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790706] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.790981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791012] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:35:19.379 [2024-12-09 05:30:01.791241] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:35:19.379 [2024-12-09 05:30:01.791253] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28d9320f-13b5-493c-9ba2-857532a9178f 01:35:19.379 [2024-12-09 05:30:01.791265] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:35:19.379 [2024-12-09 05:30:01.791276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:35:19.379 [2024-12-09 05:30:01.791287] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:35:19.379 [2024-12-09 05:30:01.791298] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:35:19.379 [2024-12-09 05:30:01.791324] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:35:19.379 [2024-12-09 05:30:01.791336] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:35:19.379 [2024-12-09 05:30:01.791346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:35:19.379 [2024-12-09 05:30:01.791357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:35:19.379 [2024-12-09 05:30:01.791367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:35:19.379 [2024-12-09 05:30:01.791377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.379 [2024-12-09 05:30:01.791388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:35:19.379 [2024-12-09 05:30:01.791401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 01:35:19.379 [2024-12-09 05:30:01.791416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.379 [2024-12-09 05:30:01.811732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.379 [2024-12-09 05:30:01.811771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:35:19.379 [2024-12-09 05:30:01.811786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.286 ms 01:35:19.379 [2024-12-09 05:30:01.811798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.379 [2024-12-09 05:30:01.812402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:19.379 [2024-12-09 05:30:01.812420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:35:19.379 [2024-12-09 05:30:01.812441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 01:35:19.379 [2024-12-09 05:30:01.812452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.637 [2024-12-09 05:30:01.868668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.638 [2024-12-09 05:30:01.868851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:35:19.638 [2024-12-09 05:30:01.868876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.638 [2024-12-09 05:30:01.868890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.638 [2024-12-09 05:30:01.868957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.638 [2024-12-09 05:30:01.868970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:35:19.638 [2024-12-09 05:30:01.868993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.638 [2024-12-09 05:30:01.869006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.638 [2024-12-09 05:30:01.869088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.638 [2024-12-09 05:30:01.869105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:35:19.638 [2024-12-09 05:30:01.869117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.638 [2024-12-09 05:30:01.869130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.638 [2024-12-09 05:30:01.869152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.638 [2024-12-09 05:30:01.869165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:35:19.638 [2024-12-09 05:30:01.869178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.638 [2024-12-09 05:30:01.869197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
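(Annotation: the ftl_debug.c statistics dump above is self-consistent. "WAF" — write amplification factor, the ratio of total media writes to user writes — is printed as "inf" because with total writes: 960 and user writes: 0 the ratio 960 / 0 has no finite value; all 960 writes in this run were internal metadata writes, consistent with the 100 all-free bands and "total valid LBAs: 0". A small sketch that recomputes it from the dump's own fields, under the same one-record-per-line build.log assumption as before:

    # Recompute WAF = total writes / user writes from the stats dump;
    # prints "inf" when no user writes were recorded.
    awk -F': ' '/total writes/ { t = $NF }
                /user writes/  { u = $NF }
                END { print (u + 0 ? t / u : "inf") }' build.log
)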
01:35:19.638 [2024-12-09 05:30:02.000000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.638 [2024-12-09 05:30:02.000078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:35:19.638 [2024-12-09 05:30:02.000099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.638 [2024-12-09 05:30:02.000113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:35:19.895 [2024-12-09 05:30:02.104206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:35:19.895 [2024-12-09 05:30:02.104388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:35:19.895 [2024-12-09 05:30:02.104504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:35:19.895 [2024-12-09 05:30:02.104683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:35:19.895 [2024-12-09 05:30:02.104776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:35:19.895 [2024-12-09 05:30:02.104876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.104945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:35:19.895 [2024-12-09 05:30:02.104960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:35:19.895 [2024-12-09 05:30:02.104972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:35:19.895 [2024-12-09 05:30:02.104984] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:19.895 [2024-12-09 05:30:02.105165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.762 ms, result 0 01:35:20.829 01:35:20.829 01:35:21.088 05:30:03 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:35:22.989 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:35:22.989 05:30:05 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 01:35:22.989 [2024-12-09 05:30:05.133631] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:35:22.989 [2024-12-09 05:30:05.133772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80380 ] 01:35:22.989 [2024-12-09 05:30:05.319020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:23.247 [2024-12-09 05:30:05.453809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:23.505 [2024-12-09 05:30:05.856846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:35:23.505 [2024-12-09 05:30:05.856936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:35:23.766 [2024-12-09 05:30:06.021952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.022017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:35:23.766 [2024-12-09 05:30:06.022035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:35:23.766 [2024-12-09 05:30:06.022045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.022096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.022111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:35:23.766 [2024-12-09 05:30:06.022122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 01:35:23.766 [2024-12-09 05:30:06.022132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.022153] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:35:23.766 [2024-12-09 05:30:06.023116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:35:23.766 [2024-12-09 05:30:06.023148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.023159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:35:23.766 [2024-12-09 05:30:06.023171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 01:35:23.766 [2024-12-09 05:30:06.023180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.025653] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:35:23.766 [2024-12-09 05:30:06.045285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.045322] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Load super block 01:35:23.766 [2024-12-09 05:30:06.045338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.666 ms 01:35:23.766 [2024-12-09 05:30:06.045349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.045416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.045429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:35:23.766 [2024-12-09 05:30:06.045440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 01:35:23.766 [2024-12-09 05:30:06.045451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.057551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.057580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:35:23.766 [2024-12-09 05:30:06.057594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.038 ms 01:35:23.766 [2024-12-09 05:30:06.057609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.057691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.057705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:35:23.766 [2024-12-09 05:30:06.057716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 01:35:23.766 [2024-12-09 05:30:06.057726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.057779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.057791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:35:23.766 [2024-12-09 05:30:06.057802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:35:23.766 [2024-12-09 05:30:06.057812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.057843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:35:23.766 [2024-12-09 05:30:06.063327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.063359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:35:23.766 [2024-12-09 05:30:06.063376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.501 ms 01:35:23.766 [2024-12-09 05:30:06.063386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.063417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.063428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:35:23.766 [2024-12-09 05:30:06.063439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:35:23.766 [2024-12-09 05:30:06.063449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.063500] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:35:23.766 [2024-12-09 05:30:06.063527] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:35:23.766 [2024-12-09 05:30:06.063563] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 
bytes 01:35:23.766 [2024-12-09 05:30:06.063586] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:35:23.766 [2024-12-09 05:30:06.063673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:35:23.766 [2024-12-09 05:30:06.063687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:35:23.766 [2024-12-09 05:30:06.063700] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:35:23.766 [2024-12-09 05:30:06.063713] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:35:23.766 [2024-12-09 05:30:06.063724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:35:23.766 [2024-12-09 05:30:06.063735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:35:23.766 [2024-12-09 05:30:06.063746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:35:23.766 [2024-12-09 05:30:06.063761] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:35:23.766 [2024-12-09 05:30:06.063770] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:35:23.766 [2024-12-09 05:30:06.063781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.063791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:35:23.766 [2024-12-09 05:30:06.063802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 01:35:23.766 [2024-12-09 05:30:06.063812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.063880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.766 [2024-12-09 05:30:06.063890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:35:23.766 [2024-12-09 05:30:06.063901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:35:23.766 [2024-12-09 05:30:06.063910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.766 [2024-12-09 05:30:06.064003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:35:23.766 [2024-12-09 05:30:06.064018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:35:23.766 [2024-12-09 05:30:06.064029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:35:23.766 [2024-12-09 05:30:06.064040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:35:23.766 [2024-12-09 05:30:06.064050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:35:23.766 [2024-12-09 05:30:06.064060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:35:23.766 [2024-12-09 05:30:06.064069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:35:23.766 [2024-12-09 05:30:06.064080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:35:23.766 [2024-12-09 05:30:06.064090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:35:23.766 [2024-12-09 05:30:06.064099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:35:23.767 [2024-12-09 05:30:06.064112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:35:23.767 [2024-12-09 05:30:06.064121] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:35:23.767 [2024-12-09 05:30:06.064131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:35:23.767 [2024-12-09 05:30:06.064149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:35:23.767 [2024-12-09 05:30:06.064159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:35:23.767 [2024-12-09 05:30:06.064168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:35:23.767 [2024-12-09 05:30:06.064187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:35:23.767 [2024-12-09 05:30:06.064196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:35:23.767 [2024-12-09 05:30:06.064216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:35:23.767 [2024-12-09 05:30:06.064233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:35:23.767 [2024-12-09 05:30:06.064242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:35:23.767 [2024-12-09 05:30:06.064260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:35:23.767 [2024-12-09 05:30:06.064269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:35:23.767 [2024-12-09 05:30:06.064286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:35:23.767 [2024-12-09 05:30:06.064295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:35:23.767 [2024-12-09 05:30:06.064312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:35:23.767 [2024-12-09 05:30:06.064321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:35:23.767 [2024-12-09 05:30:06.064337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:35:23.767 [2024-12-09 05:30:06.064346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:35:23.767 [2024-12-09 05:30:06.064354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:35:23.767 [2024-12-09 05:30:06.064363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:35:23.767 [2024-12-09 05:30:06.064371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:35:23.767 [2024-12-09 05:30:06.064379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:35:23.767 [2024-12-09 05:30:06.064396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:35:23.767 [2024-12-09 05:30:06.064406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
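The layout dump above is internally consistent: the l2p region holds one mapping entry per logical block, so the 20971520 L2P entries at 4 bytes each (both reported by ftl_layout_setup above) come to exactly the 80.00 MiB that dump_region shows for Region l2p. A minimal standalone cross-check, with the constants copied from the notices above (plain Python, not SPDK code):

```python
# Cross-check the FTL layout dump: l2p region size = L2P entries * address size.
# Constants are copied from the ftl_layout_setup notices above; this is an
# illustrative sanity check, not part of SPDK.
L2P_ENTRIES = 20971520   # "L2P entries: 20971520"
ADDR_SIZE_B = 4          # "L2P address size: 4" (bytes per entry)
MIB = 1024 * 1024

l2p_mib = L2P_ENTRIES * ADDR_SIZE_B / MIB
print(f"Region l2p: {l2p_mib:.2f} MiB")  # -> Region l2p: 80.00 MiB
assert l2p_mib == 80.0                   # matches "blocks: 80.00 MiB" in the dump
```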
01:35:23.767 [2024-12-09 05:30:06.064415] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:35:23.767 [2024-12-09 05:30:06.064425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:35:23.767 [2024-12-09 05:30:06.064435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:35:23.767 [2024-12-09 05:30:06.064443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:35:23.767 [2024-12-09 05:30:06.064453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:35:23.767 [2024-12-09 05:30:06.064845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:35:23.767 [2024-12-09 05:30:06.064893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:35:23.767 [2024-12-09 05:30:06.064926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:35:23.767 [2024-12-09 05:30:06.064955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:35:23.767 [2024-12-09 05:30:06.064984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:35:23.767 [2024-12-09 05:30:06.065017] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:35:23.767 [2024-12-09 05:30:06.065133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:35:23.767 [2024-12-09 05:30:06.065190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:35:23.767 [2024-12-09 05:30:06.065237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:35:23.767 [2024-12-09 05:30:06.065282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:35:23.767 [2024-12-09 05:30:06.065368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:35:23.767 [2024-12-09 05:30:06.065418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:35:23.767 [2024-12-09 05:30:06.065474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:35:23.767 [2024-12-09 05:30:06.065524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:35:23.767 [2024-12-09 05:30:06.065570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:35:23.767 [2024-12-09 05:30:06.065729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:35:23.767 [2024-12-09 05:30:06.065775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:35:23.767 [2024-12-09 05:30:06.065821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:35:23.767 [2024-12-09 05:30:06.065866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:35:23.767 
[2024-12-09 05:30:06.065961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:35:23.767 [2024-12-09 05:30:06.065995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:35:23.767 [2024-12-09 05:30:06.066005] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:35:23.767 [2024-12-09 05:30:06.066017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:35:23.767 [2024-12-09 05:30:06.066029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:35:23.767 [2024-12-09 05:30:06.066040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:35:23.767 [2024-12-09 05:30:06.066051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:35:23.767 [2024-12-09 05:30:06.066063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:35:23.767 [2024-12-09 05:30:06.066077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.066089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:35:23.767 [2024-12-09 05:30:06.066100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 01:35:23.767 [2024-12-09 05:30:06.066110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.109032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.109068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:35:23.767 [2024-12-09 05:30:06.109081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.930 ms 01:35:23.767 [2024-12-09 05:30:06.109097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.109172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.109183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:35:23.767 [2024-12-09 05:30:06.109194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:35:23.767 [2024-12-09 05:30:06.109205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.187826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.187874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:35:23.767 [2024-12-09 05:30:06.187891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.687 ms 01:35:23.767 [2024-12-09 05:30:06.187903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.187961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.187973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:35:23.767 [2024-12-09 05:30:06.187990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:35:23.767 
[2024-12-09 05:30:06.188001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.188895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.188918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:35:23.767 [2024-12-09 05:30:06.188930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 01:35:23.767 [2024-12-09 05:30:06.188941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.189076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.189092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:35:23.767 [2024-12-09 05:30:06.189108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 01:35:23.767 [2024-12-09 05:30:06.189118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:23.767 [2024-12-09 05:30:06.210634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:23.767 [2024-12-09 05:30:06.210676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:35:23.767 [2024-12-09 05:30:06.210691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.526 ms 01:35:23.767 [2024-12-09 05:30:06.210702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:24.027 [2024-12-09 05:30:06.232691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:35:24.027 [2024-12-09 05:30:06.232735] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:35:24.027 [2024-12-09 05:30:06.232752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:24.027 [2024-12-09 05:30:06.232764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:35:24.027 [2024-12-09 05:30:06.232777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.957 ms 01:35:24.027 [2024-12-09 05:30:06.232788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:24.027 [2024-12-09 05:30:06.266016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:24.027 [2024-12-09 05:30:06.266058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:35:24.027 [2024-12-09 05:30:06.266074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.231 ms 01:35:24.027 [2024-12-09 05:30:06.266086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:24.027 [2024-12-09 05:30:06.286015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:24.027 [2024-12-09 05:30:06.286235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:35:24.027 [2024-12-09 05:30:06.286257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.901 ms 01:35:24.027 [2024-12-09 05:30:06.286268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:35:24.027 [2024-12-09 05:30:06.305374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:35:24.027 [2024-12-09 05:30:06.305410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:35:24.027 [2024-12-09 05:30:06.305425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.093 ms 01:35:24.027 [2024-12-09 05:30:06.305435] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.306262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.306296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
01:35:24.027 [2024-12-09 05:30:06.306313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms
01:35:24.027 [2024-12-09 05:30:06.306324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.414619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.414926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
01:35:24.027 [2024-12-09 05:30:06.414965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.447 ms
01:35:24.027 [2024-12-09 05:30:06.414978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.425990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
01:35:24.027 [2024-12-09 05:30:06.430869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.430903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
01:35:24.027 [2024-12-09 05:30:06.430921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.774 ms
01:35:24.027 [2024-12-09 05:30:06.430932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.431091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.431105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
01:35:24.027 [2024-12-09 05:30:06.431122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
01:35:24.027 [2024-12-09 05:30:06.431132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.431221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.431235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
01:35:24.027 [2024-12-09 05:30:06.431246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
01:35:24.027 [2024-12-09 05:30:06.431257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.431280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.431292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
01:35:24.027 [2024-12-09 05:30:06.431302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
01:35:24.027 [2024-12-09 05:30:06.431313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.431357] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
01:35:24.027 [2024-12-09 05:30:06.431370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.431380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
01:35:24.027 [2024-12-09 05:30:06.431391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
01:35:24.027 [2024-12-09 05:30:06.431401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.467583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.467626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
01:35:24.027 [2024-12-09 05:30:06.467649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.218 ms
01:35:24.027 [2024-12-09 05:30:06.467660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.467750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:35:24.027 [2024-12-09 05:30:06.467763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
01:35:24.027 [2024-12-09 05:30:06.467775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
01:35:24.027 [2024-12-09 05:30:06.467786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:35:24.027 [2024-12-09 05:30:06.469327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 447.565 ms, result 0
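Every management step in the 'FTL startup' process above is logged by trace_step as a fixed four-message group (Action, name, duration, status), so the 447.565 ms total can be attributed to individual steps by pairing each name with the duration that follows it. A rough post-processing sketch over a saved copy of such a console log (hypothetical helper, plain Python, not an SPDK tool; the regexes assume only the message shapes visible above):

```python
import re
import sys

# Pair each trace_step "name: <step>" with the "duration: <N> ms" that follows
# it, then print the slowest steps. Illustrative log post-processing only; the
# pairing relies on trace_step always logging name before duration, as above.
NAME_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}\s|$)", re.M)
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

def step_durations(log_text):
    names = NAME_RE.findall(log_text)
    durations = [float(ms) for ms in DUR_RE.findall(log_text)]
    return list(zip(names, durations))

if __name__ == "__main__":
    text = open(sys.argv[1]).read()
    for name, ms in sorted(step_durations(text), key=lambda p: -p[1])[:5]:
        print(f"{ms:10.3f} ms  {name}")
```

Run against this startup sequence it would rank 'Restore P2L checkpoints' (108.447 ms), 'Initialize NV cache' (78.687 ms) and 'Initialize metadata' (42.930 ms) as the dominant steps.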
01:35:25.405  [2024-12-09T05:30:08.795Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-09T05:30:09.742Z] Copying: 46/1024 [MB] (22 MBps) [2024-12-09T05:30:10.678Z] Copying: 68/1024 [MB] (22 MBps) [2024-12-09T05:30:11.665Z] Copying: 92/1024 [MB] (23 MBps) [2024-12-09T05:30:12.602Z] Copying: 114/1024 [MB] (22 MBps) [2024-12-09T05:30:13.539Z] Copying: 136/1024 [MB] (21 MBps) [2024-12-09T05:30:14.477Z] Copying: 158/1024 [MB] (22 MBps) [2024-12-09T05:30:15.850Z] Copying: 181/1024 [MB] (22 MBps) [2024-12-09T05:30:16.785Z] Copying: 203/1024 [MB] (22 MBps) [2024-12-09T05:30:17.722Z] Copying: 225/1024 [MB] (21 MBps) [2024-12-09T05:30:18.661Z] Copying: 247/1024 [MB] (21 MBps) [2024-12-09T05:30:19.598Z] Copying: 268/1024 [MB] (21 MBps) [2024-12-09T05:30:20.537Z] Copying: 290/1024 [MB] (22 MBps) [2024-12-09T05:30:21.474Z] Copying: 313/1024 [MB] (22 MBps) [2024-12-09T05:30:22.852Z] Copying: 335/1024 [MB] (22 MBps) [2024-12-09T05:30:23.785Z] Copying: 358/1024 [MB] (22 MBps) [2024-12-09T05:30:24.718Z] Copying: 380/1024 [MB] (22 MBps) [2024-12-09T05:30:25.674Z] Copying: 403/1024 [MB] (22 MBps) [2024-12-09T05:30:26.609Z] Copying: 425/1024 [MB] (22 MBps) [2024-12-09T05:30:27.544Z] Copying: 448/1024 [MB] (22 MBps) [2024-12-09T05:30:28.482Z] Copying: 470/1024 [MB] (22 MBps) [2024-12-09T05:30:29.883Z] Copying: 492/1024 [MB] (22 MBps) [2024-12-09T05:30:30.451Z] Copying: 514/1024 [MB] (22 MBps) [2024-12-09T05:30:31.830Z] Copying: 537/1024 [MB] (22 MBps) [2024-12-09T05:30:32.763Z] Copying: 560/1024 [MB] (23 MBps) [2024-12-09T05:30:33.698Z] Copying: 583/1024 [MB] (22 MBps) [2024-12-09T05:30:34.631Z] Copying: 605/1024 [MB] (22 MBps) [2024-12-09T05:30:35.569Z] Copying: 627/1024 [MB] (21 MBps) [2024-12-09T05:30:36.505Z] Copying: 648/1024 [MB] (21 MBps) [2024-12-09T05:30:37.440Z] Copying: 671/1024 [MB] (22 MBps) [2024-12-09T05:30:38.822Z] Copying: 693/1024 [MB] (22 MBps) [2024-12-09T05:30:39.763Z] Copying: 716/1024 [MB] (22 MBps) [2024-12-09T05:30:40.770Z] Copying: 738/1024 [MB] (22 MBps) [2024-12-09T05:30:41.705Z] Copying: 762/1024 [MB] (23 MBps) [2024-12-09T05:30:42.640Z] Copying: 784/1024 [MB] (22 MBps) [2024-12-09T05:30:43.575Z] Copying: 806/1024 [MB] (22 MBps) [2024-12-09T05:30:44.509Z] Copying: 828/1024 [MB] (21 MBps) [2024-12-09T05:30:45.444Z] Copying: 849/1024 [MB] (21 MBps) [2024-12-09T05:30:46.819Z] Copying: 871/1024 [MB] (21 MBps) [2024-12-09T05:30:47.757Z] Copying: 893/1024 [MB] (21 MBps) [2024-12-09T05:30:48.693Z] Copying: 915/1024 [MB] (21 MBps) [2024-12-09T05:30:49.629Z] Copying: 936/1024 [MB] (21 MBps) [2024-12-09T05:30:50.565Z]
Copying: 958/1024 [MB] (21 MBps) [2024-12-09T05:30:51.501Z] Copying: 979/1024 [MB] (21 MBps) [2024-12-09T05:30:52.437Z] Copying: 1000/1024 [MB] (21 MBps) [2024-12-09T05:30:52.694Z] Copying: 1022/1024 [MB] (21 MBps) [2024-12-09T05:30:52.694Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-09 05:30:52.492605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.238 [2024-12-09 05:30:52.492678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:36:10.238 [2024-12-09 05:30:52.492699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:36:10.238 [2024-12-09 05:30:52.492709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.238 [2024-12-09 05:30:52.492740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:36:10.238 [2024-12-09 05:30:52.497449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.238 [2024-12-09 05:30:52.497499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:36:10.238 [2024-12-09 05:30:52.497512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 01:36:10.239 [2024-12-09 05:30:52.497522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.500182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.500224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:36:10.239 [2024-12-09 05:30:52.500238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 01:36:10.239 [2024-12-09 05:30:52.500248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.517548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.517588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:36:10.239 [2024-12-09 05:30:52.517602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.311 ms 01:36:10.239 [2024-12-09 05:30:52.517664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.522232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.522266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:36:10.239 [2024-12-09 05:30:52.522285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.545 ms 01:36:10.239 [2024-12-09 05:30:52.522295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.557685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.557847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:36:10.239 [2024-12-09 05:30:52.557868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.393 ms 01:36:10.239 [2024-12-09 05:30:52.557878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.579285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.579323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:36:10.239 [2024-12-09 05:30:52.579335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.375 ms 01:36:10.239 [2024-12-09 05:30:52.579345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.579979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.580015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:36:10.239 [2024-12-09 05:30:52.580027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 01:36:10.239 [2024-12-09 05:30:52.580038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.614470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.614505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:36:10.239 [2024-12-09 05:30:52.614517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.472 ms 01:36:10.239 [2024-12-09 05:30:52.614526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.648801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.648838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:36:10.239 [2024-12-09 05:30:52.648851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.294 ms 01:36:10.239 [2024-12-09 05:30:52.648861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.239 [2024-12-09 05:30:52.684004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.239 [2024-12-09 05:30:52.684146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:36:10.239 [2024-12-09 05:30:52.684165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.163 ms 01:36:10.239 [2024-12-09 05:30:52.684175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.498 [2024-12-09 05:30:52.716827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:10.498 [2024-12-09 05:30:52.716863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:36:10.498 [2024-12-09 05:30:52.716875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.598 ms 01:36:10.498 [2024-12-09 05:30:52.716885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.498 [2024-12-09 05:30:52.716920] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:36:10.498 [2024-12-09 05:30:52.716942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 256 / 261120 wr_cnt: 1 state: open 01:36:10.498 [2024-12-09 05:30:52.716958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.716969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.716979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.716990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 
[2024-12-09 05:30:52.717031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 01:36:10.498 [2024-12-09 05:30:52.717277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:36:10.498 [2024-12-09 05:30:52.717335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:36:10.499 [2024-12-09 05:30:52.717787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
01:36:10.499 [2024-12-09 05:30:52.717970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
01:36:10.499 [2024-12-09 05:30:52.717979] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28d9320f-13b5-493c-9ba2-857532a9178f
01:36:10.499 [2024-12-09 05:30:52.717989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 256
01:36:10.499 [2024-12-09 05:30:52.717999] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1216
01:36:10.499 [2024-12-09 05:30:52.718007] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 256
01:36:10.499 [2024-12-09 05:30:52.718017] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 4.7500
01:36:10.499 [2024-12-09 05:30:52.718039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
01:36:10.499 [2024-12-09 05:30:52.718048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
01:36:10.499 [2024-12-09 05:30:52.718057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
01:36:10.499 [2024-12-09 05:30:52.718065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
01:36:10.499 [2024-12-09 05:30:52.718074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
01:36:10.499 [2024-12-09 05:30:52.718082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:36:10.499 [2024-12-09 05:30:52.718092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
01:36:10.499 [2024-12-09 05:30:52.718102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms
01:36:10.499 [2024-12-09 05:30:52.718116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
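The WAF figure above is simply total media writes divided by user writes: 1216 / 256 = 4.7500. In this short run only 256 of the 1216 writes carried user data; the remainder is FTL metadata traffic (the Persist superblock / band info / valid map / trim metadata steps above). The same arithmetic, with the two counters copied from ftl_dev_dump_stats (plain Python):

```python
# Write amplification factor from the ftl_dev_dump_stats counters above.
total_writes = 1216  # "total writes: 1216" - all media writes, incl. metadata
user_writes = 256    # "user writes: 256"  - writes carrying user data
print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 4.7500
```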
01:36:10.499 [2024-12-09 05:30:52.737865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:36:10.499 [2024-12-09 05:30:52.737898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
01:36:10.499 [2024-12-09 05:30:52.737910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.750 ms
01:36:10.499 [2024-12-09 05:30:52.737921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:36:10.499 [2024-12-09 05:30:52.738540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:36:10.499 [2024-12-09 05:30:52.738559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
01:36:10.499 [2024-12-09 05:30:52.738569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms
01:36:10.499 [2024-12-09 05:30:52.738596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:36:10.499 [2024-12-09 05:30:52.793039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:36:10.499 [2024-12-09 05:30:52.793074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
01:36:10.499 [2024-12-09 05:30:52.793087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:36:10.499 [2024-12-09 05:30:52.793098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:36:10.499 [2024-12-09 05:30:52.793156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:36:10.499 [2024-12-09 05:30:52.793172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
01:36:10.499 [2024-12-09 05:30:52.793182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:36:10.499 [2024-12-09 05:30:52.793191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:36:10.499 [2024-12-09 05:30:52.793269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:36:10.499 [2024-12-09 05:30:52.793283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
01:36:10.499 [2024-12-09 05:30:52.793293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:36:10.500 [2024-12-09 05:30:52.793303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:36:10.500 [2024-12-09 05:30:52.793321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:36:10.500 [2024-12-09 05:30:52.793331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
01:36:10.500 [2024-12-09 05:30:52.793346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
01:36:10.500 [2024-12-09 05:30:52.793356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:36:10.500 [2024-12-09 05:30:52.918953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
01:36:10.500 [2024-12-09 05:30:52.919020] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:36:10.500 [2024-12-09 05:30:52.919036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.500 [2024-12-09 05:30:52.919048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.758 [2024-12-09 05:30:53.017688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.758 [2024-12-09 05:30:53.017979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:36:10.758 [2024-12-09 05:30:53.018003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.758 [2024-12-09 05:30:53.018014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.758 [2024-12-09 05:30:53.018127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.758 [2024-12-09 05:30:53.018139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:36:10.758 [2024-12-09 05:30:53.018152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.759 [2024-12-09 05:30:53.018163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.759 [2024-12-09 05:30:53.018204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.759 [2024-12-09 05:30:53.018215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:36:10.759 [2024-12-09 05:30:53.018226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.759 [2024-12-09 05:30:53.018243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.759 [2024-12-09 05:30:53.018367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.759 [2024-12-09 05:30:53.018381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:36:10.759 [2024-12-09 05:30:53.018393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.759 [2024-12-09 05:30:53.018404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.759 [2024-12-09 05:30:53.018442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.759 [2024-12-09 05:30:53.018454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:36:10.759 [2024-12-09 05:30:53.018490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.759 [2024-12-09 05:30:53.018517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.759 [2024-12-09 05:30:53.018570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.759 [2024-12-09 05:30:53.018581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:36:10.759 [2024-12-09 05:30:53.018593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.759 [2024-12-09 05:30:53.018604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.759 [2024-12-09 05:30:53.018654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:10.759 [2024-12-09 05:30:53.018666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:36:10.759 [2024-12-09 05:30:53.018677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:10.759 [2024-12-09 05:30:53.018692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:10.759 [2024-12-09 05:30:53.018844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 527.047 ms, result 0 01:36:12.133 01:36:12.134 01:36:12.134 05:30:54 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 01:36:12.391 [2024-12-09 05:30:54.594532] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:36:12.391 [2024-12-09 05:30:54.594650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80876 ] 01:36:12.391 [2024-12-09 05:30:54.774314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:12.649 [2024-12-09 05:30:54.900049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:12.907 [2024-12-09 05:30:55.301383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:36:12.907 [2024-12-09 05:30:55.301487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:36:13.166 [2024-12-09 05:30:55.465508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.465569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:36:13.166 [2024-12-09 05:30:55.465587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:36:13.166 [2024-12-09 05:30:55.465597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.465646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.465661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:36:13.166 [2024-12-09 05:30:55.465671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 01:36:13.166 [2024-12-09 05:30:55.465681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.465703] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:36:13.166 [2024-12-09 05:30:55.466615] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:36:13.166 [2024-12-09 05:30:55.466640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.466651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:36:13.166 [2024-12-09 05:30:55.466662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 01:36:13.166 [2024-12-09 05:30:55.466672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.469105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:36:13.166 [2024-12-09 05:30:55.488043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.488082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:36:13.166 [2024-12-09 05:30:55.488097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.970 ms 01:36:13.166 [2024-12-09 05:30:55.488109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.488198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.488215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:36:13.166 [2024-12-09 05:30:55.488227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 01:36:13.166 [2024-12-09 05:30:55.488237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.500708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.500741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:36:13.166 [2024-12-09 05:30:55.500754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 01:36:13.166 [2024-12-09 05:30:55.500770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.500858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.500871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:36:13.166 [2024-12-09 05:30:55.500882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 01:36:13.166 [2024-12-09 05:30:55.500892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.500950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.500962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:36:13.166 [2024-12-09 05:30:55.500972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:36:13.166 [2024-12-09 05:30:55.500982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.501012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:36:13.166 [2024-12-09 05:30:55.506522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.506553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:36:13.166 [2024-12-09 05:30:55.506569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.526 ms 01:36:13.166 [2024-12-09 05:30:55.506579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.506609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.166 [2024-12-09 05:30:55.506620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:36:13.166 [2024-12-09 05:30:55.506630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:36:13.166 [2024-12-09 05:30:55.506640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.166 [2024-12-09 05:30:55.506674] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:36:13.166 [2024-12-09 05:30:55.506699] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:36:13.166 [2024-12-09 05:30:55.506735] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:36:13.166 [2024-12-09 05:30:55.506757] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:36:13.167 [2024-12-09 05:30:55.506861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:36:13.167 [2024-12-09 05:30:55.506880] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:36:13.167 [2024-12-09 05:30:55.506894] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:36:13.167 [2024-12-09 05:30:55.506908] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:36:13.167 [2024-12-09 05:30:55.506920] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:36:13.167 [2024-12-09 05:30:55.506932] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:36:13.167 [2024-12-09 05:30:55.506943] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:36:13.167 [2024-12-09 05:30:55.506957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:36:13.167 [2024-12-09 05:30:55.506967] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:36:13.167 [2024-12-09 05:30:55.506979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.167 [2024-12-09 05:30:55.506990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:36:13.167 [2024-12-09 05:30:55.507008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 01:36:13.167 [2024-12-09 05:30:55.507017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.167 [2024-12-09 05:30:55.507085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.167 [2024-12-09 05:30:55.507096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:36:13.167 [2024-12-09 05:30:55.507107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:36:13.167 [2024-12-09 05:30:55.507116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.167 [2024-12-09 05:30:55.507212] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:36:13.167 [2024-12-09 05:30:55.507227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:36:13.167 [2024-12-09 05:30:55.507238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:36:13.167 [2024-12-09 05:30:55.507268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:36:13.167 [2024-12-09 05:30:55.507298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:36:13.167 [2024-12-09 05:30:55.507317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:36:13.167 [2024-12-09 05:30:55.507327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:36:13.167 [2024-12-09 05:30:55.507336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:36:13.167 [2024-12-09 05:30:55.507355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:36:13.167 [2024-12-09 05:30:55.507364] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:36:13.167 [2024-12-09 05:30:55.507373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:36:13.167 [2024-12-09 05:30:55.507391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:36:13.167 [2024-12-09 05:30:55.507417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:36:13.167 [2024-12-09 05:30:55.507444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:36:13.167 [2024-12-09 05:30:55.507493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:36:13.167 [2024-12-09 05:30:55.507520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:36:13.167 [2024-12-09 05:30:55.507547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:36:13.167 [2024-12-09 05:30:55.507564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:36:13.167 [2024-12-09 05:30:55.507573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:36:13.167 [2024-12-09 05:30:55.507582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:36:13.167 [2024-12-09 05:30:55.507590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:36:13.167 [2024-12-09 05:30:55.507599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:36:13.167 [2024-12-09 05:30:55.507607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:36:13.167 [2024-12-09 05:30:55.507624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:36:13.167 [2024-12-09 05:30:55.507651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507660] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:36:13.167 [2024-12-09 05:30:55.507670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:36:13.167 [2024-12-09 05:30:55.507680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
01:36:13.167 [2024-12-09 05:30:55.507691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:36:13.167 [2024-12-09 05:30:55.507701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:36:13.167 [2024-12-09 05:30:55.507711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:36:13.167 [2024-12-09 05:30:55.507720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:36:13.167 [2024-12-09 05:30:55.507729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:36:13.167 [2024-12-09 05:30:55.507738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:36:13.167 [2024-12-09 05:30:55.507748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:36:13.167 [2024-12-09 05:30:55.507759] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:36:13.167 [2024-12-09 05:30:55.507772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:36:13.167 [2024-12-09 05:30:55.507789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:36:13.167 [2024-12-09 05:30:55.507800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:36:13.167 [2024-12-09 05:30:55.507810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:36:13.168 [2024-12-09 05:30:55.507821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:36:13.168 [2024-12-09 05:30:55.507831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:36:13.168 [2024-12-09 05:30:55.507841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:36:13.168 [2024-12-09 05:30:55.507852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:36:13.168 [2024-12-09 05:30:55.507862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:36:13.168 [2024-12-09 05:30:55.507872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:36:13.168 [2024-12-09 05:30:55.507882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:36:13.168 [2024-12-09 05:30:55.507892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:36:13.168 [2024-12-09 05:30:55.507902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:36:13.168 [2024-12-09 05:30:55.507911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:36:13.168 [2024-12-09 05:30:55.507921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
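The layout dump above is internally consistent: superblock region sizes are recorded in FTL blocks, and with a 4096-byte block they reproduce the MiB figures printed by ftl_layout_dump. The block size itself is an inference from the numbers (it also matches the block_size=4096 that dirty_shutdown.sh sets later in this run), not something this dump states directly. A quick shell check, using only values copied from the log:

# Sanity-check of the layout numbers (all inputs copied from the log;
# the 4096-byte FTL block size is an inferred assumption, not a logged fact)
blk=4096
# L2P table: 20971520 entries x 4-byte addresses -> 80 MiB ("Region l2p ... blocks: 80.00 MiB")
echo $(( 20971520 * 4 / 1048576 ))      # -> 80
# Same region in the nvc SB dump above: type:0x2 blk_sz:0x5000 blocks -> 80 MiB
echo $(( 0x5000 * blk / 1048576 ))      # -> 80
# Each P2L region: 2048 checkpoint pages x 4 KiB -> 8 MiB ("Region p2l0 ... blocks: 8.00 MiB")
echo $(( 2048 * blk / 1048576 ))        # -> 8
# Data region in the base-device SB dump that follows: type:0x9 blk_sz:0x1900000 blocks -> 102400 MiB
echo $(( 0x1900000 * blk / 1048576 ))   # -> 102400 ("Region data_btm ... blocks: 102400.00 MiB")

The same arithmetic explains the 80.12 MiB offset of band_md in the NV cache layout: it sits directly after the 0.12 MiB superblock region and the 80 MiB L2P region.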
01:36:13.168 [2024-12-09 05:30:55.507931] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:36:13.168 [2024-12-09 05:30:55.507942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:36:13.168 [2024-12-09 05:30:55.507952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:36:13.168 [2024-12-09 05:30:55.507963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:36:13.168 [2024-12-09 05:30:55.507973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:36:13.168 [2024-12-09 05:30:55.507985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:36:13.168 [2024-12-09 05:30:55.507995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.168 [2024-12-09 05:30:55.508006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:36:13.168 [2024-12-09 05:30:55.508017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 01:36:13.168 [2024-12-09 05:30:55.508027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.168 [2024-12-09 05:30:55.556028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.168 [2024-12-09 05:30:55.556066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:36:13.168 [2024-12-09 05:30:55.556079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.031 ms 01:36:13.168 [2024-12-09 05:30:55.556095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.168 [2024-12-09 05:30:55.556171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.168 [2024-12-09 05:30:55.556184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:36:13.168 [2024-12-09 05:30:55.556194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 01:36:13.168 [2024-12-09 05:30:55.556203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.633586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.633626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:36:13.427 [2024-12-09 05:30:55.633641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.449 ms 01:36:13.427 [2024-12-09 05:30:55.633652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.633694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.633706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:36:13.427 [2024-12-09 05:30:55.633722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:36:13.427 [2024-12-09 05:30:55.633732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.634575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.634593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:36:13.427 [2024-12-09 
05:30:55.634605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 01:36:13.427 [2024-12-09 05:30:55.634615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.634757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.634773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:36:13.427 [2024-12-09 05:30:55.634791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 01:36:13.427 [2024-12-09 05:30:55.634801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.655245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.655283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:36:13.427 [2024-12-09 05:30:55.655298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.455 ms 01:36:13.427 [2024-12-09 05:30:55.655309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.673781] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 01:36:13.427 [2024-12-09 05:30:55.673819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:36:13.427 [2024-12-09 05:30:55.673834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.673845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:36:13.427 [2024-12-09 05:30:55.673856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.450 ms 01:36:13.427 [2024-12-09 05:30:55.673866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.702769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.702818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:36:13.427 [2024-12-09 05:30:55.702833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.903 ms 01:36:13.427 [2024-12-09 05:30:55.702844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.720898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.720934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:36:13.427 [2024-12-09 05:30:55.720948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.036 ms 01:36:13.427 [2024-12-09 05:30:55.720958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.738927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.738965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:36:13.427 [2024-12-09 05:30:55.738979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.943 ms 01:36:13.427 [2024-12-09 05:30:55.738990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.739836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.740009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:36:13.427 [2024-12-09 05:30:55.740037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.715 ms 01:36:13.427 [2024-12-09 05:30:55.740049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.837637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.837707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:36:13.427 [2024-12-09 05:30:55.837734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.715 ms 01:36:13.427 [2024-12-09 05:30:55.837746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.848450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:36:13.427 [2024-12-09 05:30:55.852391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.852422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:36:13.427 [2024-12-09 05:30:55.852438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.614 ms 01:36:13.427 [2024-12-09 05:30:55.852449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.852556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.852572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:36:13.427 [2024-12-09 05:30:55.852589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:36:13.427 [2024-12-09 05:30:55.852600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.853928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.854129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:36:13.427 [2024-12-09 05:30:55.854151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 01:36:13.427 [2024-12-09 05:30:55.854161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.854205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.854218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:36:13.427 [2024-12-09 05:30:55.854245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:36:13.427 [2024-12-09 05:30:55.854257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.427 [2024-12-09 05:30:55.854307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:36:13.427 [2024-12-09 05:30:55.854321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.427 [2024-12-09 05:30:55.854332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:36:13.427 [2024-12-09 05:30:55.854344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:36:13.427 [2024-12-09 05:30:55.854355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.686 [2024-12-09 05:30:55.890185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.686 [2024-12-09 05:30:55.890223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:36:13.686 [2024-12-09 05:30:55.890244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.863 ms 01:36:13.686 [2024-12-09 05:30:55.890255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 01:36:13.686 [2024-12-09 05:30:55.890334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:13.686 [2024-12-09 05:30:55.890347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:36:13.686 [2024-12-09 05:30:55.890358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 01:36:13.686 [2024-12-09 05:30:55.890369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:13.686 [2024-12-09 05:30:55.892270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.897 ms, result 0 01:36:14.691  [2024-12-09T05:30:58.525Z] Copying: 2468/1048576 [kB] (2468 kBps) [2024-12-09T05:30:59.463Z] Copying: 25/1024 [MB] (23 MBps) [2024-12-09T05:31:00.399Z] Copying: 49/1024 [MB] (23 MBps) [2024-12-09T05:31:01.342Z] Copying: 73/1024 [MB] (24 MBps) [2024-12-09T05:31:02.278Z] Copying: 97/1024 [MB] (23 MBps) [2024-12-09T05:31:03.215Z] Copying: 121/1024 [MB] (24 MBps) [2024-12-09T05:31:04.149Z] Copying: 145/1024 [MB] (24 MBps) [2024-12-09T05:31:05.526Z] Copying: 169/1024 [MB] (23 MBps) [2024-12-09T05:31:06.460Z] Copying: 192/1024 [MB] (23 MBps) [2024-12-09T05:31:07.395Z] Copying: 215/1024 [MB] (22 MBps) [2024-12-09T05:31:08.333Z] Copying: 238/1024 [MB] (23 MBps) [2024-12-09T05:31:09.270Z] Copying: 262/1024 [MB] (23 MBps) [2024-12-09T05:31:10.207Z] Copying: 285/1024 [MB] (23 MBps) [2024-12-09T05:31:11.144Z] Copying: 308/1024 [MB] (22 MBps) [2024-12-09T05:31:12.097Z] Copying: 333/1024 [MB] (24 MBps) [2024-12-09T05:31:13.481Z] Copying: 355/1024 [MB] (22 MBps) [2024-12-09T05:31:14.418Z] Copying: 378/1024 [MB] (22 MBps) [2024-12-09T05:31:15.355Z] Copying: 401/1024 [MB] (22 MBps) [2024-12-09T05:31:16.406Z] Copying: 423/1024 [MB] (22 MBps) [2024-12-09T05:31:17.340Z] Copying: 446/1024 [MB] (22 MBps) [2024-12-09T05:31:18.276Z] Copying: 470/1024 [MB] (23 MBps) [2024-12-09T05:31:19.214Z] Copying: 493/1024 [MB] (22 MBps) [2024-12-09T05:31:20.151Z] Copying: 516/1024 [MB] (23 MBps) [2024-12-09T05:31:21.087Z] Copying: 539/1024 [MB] (23 MBps) [2024-12-09T05:31:22.466Z] Copying: 562/1024 [MB] (23 MBps) [2024-12-09T05:31:23.403Z] Copying: 586/1024 [MB] (23 MBps) [2024-12-09T05:31:24.341Z] Copying: 609/1024 [MB] (22 MBps) [2024-12-09T05:31:25.284Z] Copying: 632/1024 [MB] (23 MBps) [2024-12-09T05:31:26.218Z] Copying: 655/1024 [MB] (22 MBps) [2024-12-09T05:31:27.154Z] Copying: 678/1024 [MB] (22 MBps) [2024-12-09T05:31:28.105Z] Copying: 701/1024 [MB] (23 MBps) [2024-12-09T05:31:29.482Z] Copying: 724/1024 [MB] (23 MBps) [2024-12-09T05:31:30.418Z] Copying: 747/1024 [MB] (22 MBps) [2024-12-09T05:31:31.355Z] Copying: 770/1024 [MB] (22 MBps) [2024-12-09T05:31:32.293Z] Copying: 793/1024 [MB] (23 MBps) [2024-12-09T05:31:33.230Z] Copying: 817/1024 [MB] (23 MBps) [2024-12-09T05:31:34.167Z] Copying: 841/1024 [MB] (23 MBps) [2024-12-09T05:31:35.106Z] Copying: 865/1024 [MB] (23 MBps) [2024-12-09T05:31:36.483Z] Copying: 889/1024 [MB] (24 MBps) [2024-12-09T05:31:37.060Z] Copying: 912/1024 [MB] (23 MBps) [2024-12-09T05:31:38.483Z] Copying: 936/1024 [MB] (23 MBps) [2024-12-09T05:31:39.049Z] Copying: 960/1024 [MB] (23 MBps) [2024-12-09T05:31:40.423Z] Copying: 983/1024 [MB] (23 MBps) [2024-12-09T05:31:40.991Z] Copying: 1005/1024 [MB] (22 MBps) [2024-12-09T05:31:41.929Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-09 05:31:41.592407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.592789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Deinit core IO channel 01:36:59.473 [2024-12-09 05:31:41.592842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:36:59.473 [2024-12-09 05:31:41.592854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.592917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:36:59.473 [2024-12-09 05:31:41.598521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.598556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:36:59.473 [2024-12-09 05:31:41.598570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.588 ms 01:36:59.473 [2024-12-09 05:31:41.598581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.598826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.598840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:36:59.473 [2024-12-09 05:31:41.598852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 01:36:59.473 [2024-12-09 05:31:41.598868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.609988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.610132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:36:59.473 [2024-12-09 05:31:41.610215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.116 ms 01:36:59.473 [2024-12-09 05:31:41.610253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.615794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.615826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:36:59.473 [2024-12-09 05:31:41.615840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.487 ms 01:36:59.473 [2024-12-09 05:31:41.615859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.653600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.653634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:36:59.473 [2024-12-09 05:31:41.653648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.727 ms 01:36:59.473 [2024-12-09 05:31:41.653659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.673508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.673695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:36:59.473 [2024-12-09 05:31:41.673718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.838 ms 01:36:59.473 [2024-12-09 05:31:41.673730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.835024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.835175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:36:59.473 [2024-12-09 05:31:41.835321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 161.416 ms 01:36:59.473 [2024-12-09 05:31:41.835363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 01:36:59.473 [2024-12-09 05:31:41.870261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.870400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:36:59.473 [2024-12-09 05:31:41.870559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.908 ms 01:36:59.473 [2024-12-09 05:31:41.870597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.473 [2024-12-09 05:31:41.904892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.473 [2024-12-09 05:31:41.905074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:36:59.473 [2024-12-09 05:31:41.905218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.290 ms 01:36:59.473 [2024-12-09 05:31:41.905257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.734 [2024-12-09 05:31:41.938846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.734 [2024-12-09 05:31:41.938978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:36:59.734 [2024-12-09 05:31:41.939067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.585 ms 01:36:59.734 [2024-12-09 05:31:41.939101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.734 [2024-12-09 05:31:41.972025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.734 [2024-12-09 05:31:41.972156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:36:59.734 [2024-12-09 05:31:41.972224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.827 ms 01:36:59.734 [2024-12-09 05:31:41.972256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.734 [2024-12-09 05:31:41.972308] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:36:59.734 [2024-12-09 05:31:41.972366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 01:36:59.734 [2024-12-09 05:31:41.972421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.972961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973152] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.973973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 
[2024-12-09 05:31:41.974949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.974998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:36:59.734 [2024-12-09 05:31:41.975483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 
state: free 01:36:59.735 [2024-12-09 05:31:41.975846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.975996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 
0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:36:59.735 [2024-12-09 05:31:41.976264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:36:59.735 [2024-12-09 05:31:41.976276] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28d9320f-13b5-493c-9ba2-857532a9178f 01:36:59.735 [2024-12-09 05:31:41.976288] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 01:36:59.735 [2024-12-09 05:31:41.976299] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 01:36:59.735 [2024-12-09 05:31:41.976310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 01:36:59.735 [2024-12-09 05:31:41.976321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 01:36:59.735 [2024-12-09 05:31:41.976344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:36:59.735 [2024-12-09 05:31:41.976367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:36:59.735 [2024-12-09 05:31:41.976377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:36:59.735 [2024-12-09 05:31:41.976386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:36:59.735 [2024-12-09 05:31:41.976395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:36:59.735 [2024-12-09 05:31:41.976406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.735 [2024-12-09 05:31:41.976417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:36:59.735 
[2024-12-09 05:31:41.976428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.105 ms 01:36:59.735 [2024-12-09 05:31:41.976438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:41.995611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.735 [2024-12-09 05:31:41.995647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:36:59.735 [2024-12-09 05:31:41.995668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.140 ms 01:36:59.735 [2024-12-09 05:31:41.995679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:41.996313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:36:59.735 [2024-12-09 05:31:41.996330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:36:59.735 [2024-12-09 05:31:41.996342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 01:36:59.735 [2024-12-09 05:31:41.996354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:42.047019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.735 [2024-12-09 05:31:42.047060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:36:59.735 [2024-12-09 05:31:42.047072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.735 [2024-12-09 05:31:42.047083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:42.047141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.735 [2024-12-09 05:31:42.047152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:36:59.735 [2024-12-09 05:31:42.047162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.735 [2024-12-09 05:31:42.047172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:42.047236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.735 [2024-12-09 05:31:42.047256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:36:59.735 [2024-12-09 05:31:42.047270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.735 [2024-12-09 05:31:42.047280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:42.047296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.735 [2024-12-09 05:31:42.047306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:36:59.735 [2024-12-09 05:31:42.047316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.735 [2024-12-09 05:31:42.047327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.735 [2024-12-09 05:31:42.172526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.735 [2024-12-09 05:31:42.172793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:36:59.735 [2024-12-09 05:31:42.172816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.735 [2024-12-09 05:31:42.172827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.271340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.271391] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:36:59.995 [2024-12-09 05:31:42.271407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.271418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.271547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.271562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:36:59.995 [2024-12-09 05:31:42.271573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.271604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.271646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.271657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:36:59.995 [2024-12-09 05:31:42.271667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.271677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.271798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.271813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:36:59.995 [2024-12-09 05:31:42.271824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.271835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.271882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.271895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:36:59.995 [2024-12-09 05:31:42.271906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.271915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.271962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.271974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:36:59.995 [2024-12-09 05:31:42.271984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.271994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.272051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:36:59.995 [2024-12-09 05:31:42.272062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:36:59.995 [2024-12-09 05:31:42.272073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:36:59.995 [2024-12-09 05:31:42.272083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:36:59.995 [2024-12-09 05:31:42.272225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 680.889 ms, result 0 01:37:01.375 01:37:01.375 01:37:01.375 05:31:43 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:37:02.761 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:37:02.761 05:31:45 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 01:37:02.761 05:31:45 ftl.ftl_restore -- ftl/restore.sh@85 -- # 
restore_kill 01:37:02.761 05:31:45 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:37:03.025 Process with pid 79186 is not found 01:37:03.025 Remove shared memory files 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79186 01:37:03.025 05:31:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79186 ']' 01:37:03.025 05:31:45 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79186 01:37:03.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79186) - No such process 01:37:03.025 05:31:45 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79186 is not found' 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:37:03.025 05:31:45 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 01:37:03.025 ************************************ 01:37:03.025 END TEST ftl_restore 01:37:03.025 ************************************ 01:37:03.025 01:37:03.025 real 3m34.620s 01:37:03.025 user 3m20.209s 01:37:03.025 sys 0m15.227s 01:37:03.025 05:31:45 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:03.025 05:31:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 01:37:03.025 05:31:45 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 01:37:03.025 05:31:45 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:37:03.025 05:31:45 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:03.025 05:31:45 ftl -- common/autotest_common.sh@10 -- # set +x 01:37:03.025 ************************************ 01:37:03.025 START TEST ftl_dirty_shutdown 01:37:03.025 ************************************ 01:37:03.025 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 01:37:03.284 * Looking for test storage... 
01:37:03.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:37:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:03.285 --rc genhtml_branch_coverage=1 01:37:03.285 --rc genhtml_function_coverage=1 01:37:03.285 --rc genhtml_legend=1 01:37:03.285 --rc geninfo_all_blocks=1 01:37:03.285 --rc geninfo_unexecuted_blocks=1 01:37:03.285 01:37:03.285 ' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:37:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:03.285 --rc genhtml_branch_coverage=1 01:37:03.285 --rc genhtml_function_coverage=1 01:37:03.285 --rc genhtml_legend=1 01:37:03.285 --rc geninfo_all_blocks=1 01:37:03.285 --rc geninfo_unexecuted_blocks=1 01:37:03.285 01:37:03.285 ' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:37:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:03.285 --rc genhtml_branch_coverage=1 01:37:03.285 --rc genhtml_function_coverage=1 01:37:03.285 --rc genhtml_legend=1 01:37:03.285 --rc geninfo_all_blocks=1 01:37:03.285 --rc geninfo_unexecuted_blocks=1 01:37:03.285 01:37:03.285 ' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:37:03.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:03.285 --rc genhtml_branch_coverage=1 01:37:03.285 --rc genhtml_function_coverage=1 01:37:03.285 --rc genhtml_legend=1 01:37:03.285 --rc geninfo_all_blocks=1 01:37:03.285 --rc geninfo_unexecuted_blocks=1 01:37:03.285 01:37:03.285 ' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 01:37:03.285 05:31:45 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81468 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81468 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81468 ']' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:03.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:03.285 05:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 01:37:03.544 [2024-12-09 05:31:45.834649] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
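The trace above shows dirty_shutdown.sh parsing its options (-c selects the NV-cache BDF, the positional argument the base device), starting spdk_tgt pinned to core 0, and blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming a stock SPDK checkout at /home/vagrant/spdk_repo/spdk; the polling loop below is only an illustration of what the wait amounts to, not the actual waitforlisten body from autotest_common.sh:

  #!/usr/bin/env bash
  rootdir=/home/vagrant/spdk_repo/spdk
  rpc_sock=/var/tmp/spdk.sock

  # Start the SPDK target pinned to core 0 (mask 0x1), as in the trace.
  "$rootdir/build/bin/spdk_tgt" -m 0x1 &
  svcpid=$!

  # Poll the UNIX-domain RPC socket until the target responds (up to ~50 s).
  for ((i = 0; i < 100; i++)); do
      "$rootdir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null && break
      sleep 0.5
  done

  # Later steps issue bdev_* RPCs (bdev_nvme_attach_controller,
  # bdev_ftl_create, ...) against $rpc_sock.

The trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT registered at dirty_shutdown.sh:42 just before the launch ensures the target is torn down on any failure path.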
01:37:03.544 [2024-12-09 05:31:45.834987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81468 ] 01:37:03.802 [2024-12-09 05:31:46.020780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:03.802 [2024-12-09 05:31:46.148547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 01:37:04.736 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:37:05.303 { 01:37:05.303 "name": "nvme0n1", 01:37:05.303 "aliases": [ 01:37:05.303 "b2ce132d-08c2-4dd4-84df-465f8ee18f9d" 01:37:05.303 ], 01:37:05.303 "product_name": "NVMe disk", 01:37:05.303 "block_size": 4096, 01:37:05.303 "num_blocks": 1310720, 01:37:05.303 "uuid": "b2ce132d-08c2-4dd4-84df-465f8ee18f9d", 01:37:05.303 "numa_id": -1, 01:37:05.303 "assigned_rate_limits": { 01:37:05.303 "rw_ios_per_sec": 0, 01:37:05.303 "rw_mbytes_per_sec": 0, 01:37:05.303 "r_mbytes_per_sec": 0, 01:37:05.303 "w_mbytes_per_sec": 0 01:37:05.303 }, 01:37:05.303 "claimed": true, 01:37:05.303 "claim_type": "read_many_write_one", 01:37:05.303 "zoned": false, 01:37:05.303 "supported_io_types": { 01:37:05.303 "read": true, 01:37:05.303 "write": true, 01:37:05.303 "unmap": true, 01:37:05.303 "flush": true, 01:37:05.303 "reset": true, 01:37:05.303 "nvme_admin": true, 01:37:05.303 "nvme_io": true, 01:37:05.303 "nvme_io_md": false, 01:37:05.303 "write_zeroes": true, 01:37:05.303 "zcopy": false, 01:37:05.303 "get_zone_info": false, 01:37:05.303 "zone_management": false, 01:37:05.303 "zone_append": false, 01:37:05.303 "compare": true, 01:37:05.303 "compare_and_write": false, 01:37:05.303 "abort": true, 01:37:05.303 "seek_hole": false, 01:37:05.303 "seek_data": false, 01:37:05.303 
"copy": true, 01:37:05.303 "nvme_iov_md": false 01:37:05.303 }, 01:37:05.303 "driver_specific": { 01:37:05.303 "nvme": [ 01:37:05.303 { 01:37:05.303 "pci_address": "0000:00:11.0", 01:37:05.303 "trid": { 01:37:05.303 "trtype": "PCIe", 01:37:05.303 "traddr": "0000:00:11.0" 01:37:05.303 }, 01:37:05.303 "ctrlr_data": { 01:37:05.303 "cntlid": 0, 01:37:05.303 "vendor_id": "0x1b36", 01:37:05.303 "model_number": "QEMU NVMe Ctrl", 01:37:05.303 "serial_number": "12341", 01:37:05.303 "firmware_revision": "8.0.0", 01:37:05.303 "subnqn": "nqn.2019-08.org.qemu:12341", 01:37:05.303 "oacs": { 01:37:05.303 "security": 0, 01:37:05.303 "format": 1, 01:37:05.303 "firmware": 0, 01:37:05.303 "ns_manage": 1 01:37:05.303 }, 01:37:05.303 "multi_ctrlr": false, 01:37:05.303 "ana_reporting": false 01:37:05.303 }, 01:37:05.303 "vs": { 01:37:05.303 "nvme_version": "1.4" 01:37:05.303 }, 01:37:05.303 "ns_data": { 01:37:05.303 "id": 1, 01:37:05.303 "can_share": false 01:37:05.303 } 01:37:05.303 } 01:37:05.303 ], 01:37:05.303 "mp_policy": "active_passive" 01:37:05.303 } 01:37:05.303 } 01:37:05.303 ]' 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:37:05.303 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 01:37:05.560 05:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a29adb7a-9dd2-4660-96bc-ed4cfdf2fb47 01:37:05.818 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:37:06.075 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f40c1519-414a-424c-90a5-f1c7635a2e2f 01:37:06.076 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f40c1519-414a-424c-90a5-f1c7635a2e2f 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:37:06.336 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.626 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:37:06.626 { 01:37:06.626 "name": "3a774e22-574d-4f74-a821-1644442aa4b7", 01:37:06.626 "aliases": [ 01:37:06.626 "lvs/nvme0n1p0" 01:37:06.626 ], 01:37:06.626 "product_name": "Logical Volume", 01:37:06.626 "block_size": 4096, 01:37:06.626 "num_blocks": 26476544, 01:37:06.626 "uuid": "3a774e22-574d-4f74-a821-1644442aa4b7", 01:37:06.626 "assigned_rate_limits": { 01:37:06.626 "rw_ios_per_sec": 0, 01:37:06.626 "rw_mbytes_per_sec": 0, 01:37:06.626 "r_mbytes_per_sec": 0, 01:37:06.626 "w_mbytes_per_sec": 0 01:37:06.627 }, 01:37:06.627 "claimed": false, 01:37:06.627 "zoned": false, 01:37:06.627 "supported_io_types": { 01:37:06.627 "read": true, 01:37:06.627 "write": true, 01:37:06.627 "unmap": true, 01:37:06.627 "flush": false, 01:37:06.627 "reset": true, 01:37:06.627 "nvme_admin": false, 01:37:06.627 "nvme_io": false, 01:37:06.627 "nvme_io_md": false, 01:37:06.627 "write_zeroes": true, 01:37:06.627 "zcopy": false, 01:37:06.627 "get_zone_info": false, 01:37:06.627 "zone_management": false, 01:37:06.627 "zone_append": false, 01:37:06.627 "compare": false, 01:37:06.627 "compare_and_write": false, 01:37:06.627 "abort": false, 01:37:06.627 "seek_hole": true, 01:37:06.627 "seek_data": true, 01:37:06.627 "copy": false, 01:37:06.627 "nvme_iov_md": false 01:37:06.627 }, 01:37:06.627 "driver_specific": { 01:37:06.627 "lvol": { 01:37:06.627 "lvol_store_uuid": "f40c1519-414a-424c-90a5-f1c7635a2e2f", 01:37:06.627 "base_bdev": "nvme0n1", 01:37:06.627 "thin_provision": true, 01:37:06.627 "num_allocated_clusters": 0, 01:37:06.627 "snapshot": false, 01:37:06.627 "clone": false, 01:37:06.627 "esnap_clone": false 01:37:06.627 } 01:37:06.627 } 01:37:06.627 } 01:37:06.627 ]' 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 01:37:06.627 05:31:48 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3a774e22-574d-4f74-a821-1644442aa4b7 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:37:06.909 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:07.167 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:37:07.167 { 01:37:07.167 "name": "3a774e22-574d-4f74-a821-1644442aa4b7", 01:37:07.167 "aliases": [ 01:37:07.167 "lvs/nvme0n1p0" 01:37:07.167 ], 01:37:07.167 "product_name": "Logical Volume", 01:37:07.167 "block_size": 4096, 01:37:07.167 "num_blocks": 26476544, 01:37:07.167 "uuid": "3a774e22-574d-4f74-a821-1644442aa4b7", 01:37:07.167 "assigned_rate_limits": { 01:37:07.167 "rw_ios_per_sec": 0, 01:37:07.167 "rw_mbytes_per_sec": 0, 01:37:07.168 "r_mbytes_per_sec": 0, 01:37:07.168 "w_mbytes_per_sec": 0 01:37:07.168 }, 01:37:07.168 "claimed": false, 01:37:07.168 "zoned": false, 01:37:07.168 "supported_io_types": { 01:37:07.168 "read": true, 01:37:07.168 "write": true, 01:37:07.168 "unmap": true, 01:37:07.168 "flush": false, 01:37:07.168 "reset": true, 01:37:07.168 "nvme_admin": false, 01:37:07.168 "nvme_io": false, 01:37:07.168 "nvme_io_md": false, 01:37:07.168 "write_zeroes": true, 01:37:07.168 "zcopy": false, 01:37:07.168 "get_zone_info": false, 01:37:07.168 "zone_management": false, 01:37:07.168 "zone_append": false, 01:37:07.168 "compare": false, 01:37:07.168 "compare_and_write": false, 01:37:07.168 "abort": false, 01:37:07.168 "seek_hole": true, 01:37:07.168 "seek_data": true, 01:37:07.168 "copy": false, 01:37:07.168 "nvme_iov_md": false 01:37:07.168 }, 01:37:07.168 "driver_specific": { 01:37:07.168 "lvol": { 01:37:07.168 "lvol_store_uuid": "f40c1519-414a-424c-90a5-f1c7635a2e2f", 01:37:07.168 "base_bdev": "nvme0n1", 01:37:07.168 "thin_provision": true, 01:37:07.168 "num_allocated_clusters": 0, 01:37:07.168 "snapshot": false, 01:37:07.168 "clone": false, 01:37:07.168 "esnap_clone": false 01:37:07.168 } 01:37:07.168 } 01:37:07.168 } 01:37:07.168 ]' 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 01:37:07.168 05:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3a774e22-574d-4f74-a821-1644442aa4b7 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a774e22-574d-4f74-a821-1644442aa4b7 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:37:07.427 { 01:37:07.427 "name": "3a774e22-574d-4f74-a821-1644442aa4b7", 01:37:07.427 "aliases": [ 01:37:07.427 "lvs/nvme0n1p0" 01:37:07.427 ], 01:37:07.427 "product_name": "Logical Volume", 01:37:07.427 "block_size": 4096, 01:37:07.427 "num_blocks": 26476544, 01:37:07.427 "uuid": "3a774e22-574d-4f74-a821-1644442aa4b7", 01:37:07.427 "assigned_rate_limits": { 01:37:07.427 "rw_ios_per_sec": 0, 01:37:07.427 "rw_mbytes_per_sec": 0, 01:37:07.427 "r_mbytes_per_sec": 0, 01:37:07.427 "w_mbytes_per_sec": 0 01:37:07.427 }, 01:37:07.427 "claimed": false, 01:37:07.427 "zoned": false, 01:37:07.427 "supported_io_types": { 01:37:07.427 "read": true, 01:37:07.427 "write": true, 01:37:07.427 "unmap": true, 01:37:07.427 "flush": false, 01:37:07.427 "reset": true, 01:37:07.427 "nvme_admin": false, 01:37:07.427 "nvme_io": false, 01:37:07.427 "nvme_io_md": false, 01:37:07.427 "write_zeroes": true, 01:37:07.427 "zcopy": false, 01:37:07.427 "get_zone_info": false, 01:37:07.427 "zone_management": false, 01:37:07.427 "zone_append": false, 01:37:07.427 "compare": false, 01:37:07.427 "compare_and_write": false, 01:37:07.427 "abort": false, 01:37:07.427 "seek_hole": true, 01:37:07.427 "seek_data": true, 01:37:07.427 "copy": false, 01:37:07.427 "nvme_iov_md": false 01:37:07.427 }, 01:37:07.427 "driver_specific": { 01:37:07.427 "lvol": { 01:37:07.427 "lvol_store_uuid": "f40c1519-414a-424c-90a5-f1c7635a2e2f", 01:37:07.427 "base_bdev": "nvme0n1", 01:37:07.427 "thin_provision": true, 01:37:07.427 "num_allocated_clusters": 0, 01:37:07.427 "snapshot": false, 01:37:07.427 "clone": false, 01:37:07.427 "esnap_clone": false 01:37:07.427 } 01:37:07.427 } 01:37:07.427 } 01:37:07.427 ]' 01:37:07.427 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3a774e22-574d-4f74-a821-1644442aa4b7 
--l2p_dram_limit 10' 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 01:37:07.686 05:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3a774e22-574d-4f74-a821-1644442aa4b7 --l2p_dram_limit 10 -c nvc0n1p0 01:37:07.945 [2024-12-09 05:31:50.147641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.147699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:37:07.945 [2024-12-09 05:31:50.147723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:37:07.945 [2024-12-09 05:31:50.147734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.147829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.147843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:37:07.945 [2024-12-09 05:31:50.147857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 01:37:07.945 [2024-12-09 05:31:50.147869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.147894] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:37:07.945 [2024-12-09 05:31:50.149012] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:37:07.945 [2024-12-09 05:31:50.149050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.149061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:37:07.945 [2024-12-09 05:31:50.149076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 01:37:07.945 [2024-12-09 05:31:50.149088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.149190] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d5ef8fa2-4aaa-43c5-83fd-e3213962e517 01:37:07.945 [2024-12-09 05:31:50.151659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.151846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:37:07.945 [2024-12-09 05:31:50.151867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 01:37:07.945 [2024-12-09 05:31:50.151885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.165825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.165860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:37:07.945 [2024-12-09 05:31:50.165873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.893 ms 01:37:07.945 [2024-12-09 05:31:50.165888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.165994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.166012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:37:07.945 [2024-12-09 05:31:50.166023] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 01:37:07.945 [2024-12-09 05:31:50.166041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.166104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.166121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:37:07.945 [2024-12-09 05:31:50.166137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:37:07.945 [2024-12-09 05:31:50.166150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.166177] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:37:07.945 [2024-12-09 05:31:50.172385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.172416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:37:07.945 [2024-12-09 05:31:50.172433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 01:37:07.945 [2024-12-09 05:31:50.172444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.172499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.945 [2024-12-09 05:31:50.172511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:37:07.945 [2024-12-09 05:31:50.172526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:37:07.945 [2024-12-09 05:31:50.172536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.945 [2024-12-09 05:31:50.172572] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:37:07.945 [2024-12-09 05:31:50.172701] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:37:07.946 [2024-12-09 05:31:50.172724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:37:07.946 [2024-12-09 05:31:50.172739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:37:07.946 [2024-12-09 05:31:50.172756] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:37:07.946 [2024-12-09 05:31:50.172768] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:37:07.946 [2024-12-09 05:31:50.172782] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:37:07.946 [2024-12-09 05:31:50.172796] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:37:07.946 [2024-12-09 05:31:50.172810] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:37:07.946 [2024-12-09 05:31:50.172820] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:37:07.946 [2024-12-09 05:31:50.172834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.946 [2024-12-09 05:31:50.172855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:37:07.946 [2024-12-09 05:31:50.172870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 01:37:07.946 [2024-12-09 05:31:50.172879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.946 [2024-12-09 05:31:50.172953] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.946 [2024-12-09 05:31:50.172964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:37:07.946 [2024-12-09 05:31:50.172977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 01:37:07.946 [2024-12-09 05:31:50.172988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.946 [2024-12-09 05:31:50.173085] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:37:07.946 [2024-12-09 05:31:50.173098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:37:07.946 [2024-12-09 05:31:50.173113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:37:07.946 [2024-12-09 05:31:50.173145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:37:07.946 [2024-12-09 05:31:50.173179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:37:07.946 [2024-12-09 05:31:50.173199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:37:07.946 [2024-12-09 05:31:50.173209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:37:07.946 [2024-12-09 05:31:50.173221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:37:07.946 [2024-12-09 05:31:50.173229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:37:07.946 [2024-12-09 05:31:50.173242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:37:07.946 [2024-12-09 05:31:50.173251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:37:07.946 [2024-12-09 05:31:50.173275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:37:07.946 [2024-12-09 05:31:50.173312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:37:07.946 [2024-12-09 05:31:50.173344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:37:07.946 [2024-12-09 05:31:50.173377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173397] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:37:07.946 [2024-12-09 05:31:50.173406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:37:07.946 [2024-12-09 05:31:50.173442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:37:07.946 [2024-12-09 05:31:50.173475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:37:07.946 [2024-12-09 05:31:50.173485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:37:07.946 [2024-12-09 05:31:50.173497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:37:07.946 [2024-12-09 05:31:50.173506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:37:07.946 [2024-12-09 05:31:50.173519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:37:07.946 [2024-12-09 05:31:50.173528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:37:07.946 [2024-12-09 05:31:50.173550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:37:07.946 [2024-12-09 05:31:50.173563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173572] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:37:07.946 [2024-12-09 05:31:50.173585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:37:07.946 [2024-12-09 05:31:50.173596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:37:07.946 [2024-12-09 05:31:50.173620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:37:07.946 [2024-12-09 05:31:50.173637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:37:07.946 [2024-12-09 05:31:50.173646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:37:07.946 [2024-12-09 05:31:50.173658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:37:07.946 [2024-12-09 05:31:50.173667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:37:07.946 [2024-12-09 05:31:50.173680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:37:07.946 [2024-12-09 05:31:50.173694] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:37:07.946 [2024-12-09 05:31:50.173713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:37:07.946 [2024-12-09 05:31:50.173725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:37:07.946 [2024-12-09 05:31:50.173739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:37:07.946 [2024-12-09 05:31:50.173749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:37:07.946 [2024-12-09 05:31:50.173762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:37:07.946 [2024-12-09 05:31:50.173771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:37:07.946 [2024-12-09 05:31:50.173786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:37:07.946 [2024-12-09 05:31:50.173796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:37:07.946 [2024-12-09 05:31:50.173810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:37:07.946 [2024-12-09 05:31:50.173822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:37:07.946 [2024-12-09 05:31:50.173838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:37:07.946 [2024-12-09 05:31:50.173848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:37:07.946 [2024-12-09 05:31:50.173862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:37:07.946 [2024-12-09 05:31:50.173872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:37:07.946 [2024-12-09 05:31:50.173886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:37:07.946 [2024-12-09 05:31:50.173896] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:37:07.946 [2024-12-09 05:31:50.173932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:37:07.947 [2024-12-09 05:31:50.173949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:37:07.947 [2024-12-09 05:31:50.173964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:37:07.947 [2024-12-09 05:31:50.173975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:37:07.947 [2024-12-09 05:31:50.173989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:37:07.947 [2024-12-09 05:31:50.174001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:07.947 [2024-12-09 05:31:50.174016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:37:07.947 [2024-12-09 05:31:50.174026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 01:37:07.947 [2024-12-09 05:31:50.174039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:07.947 [2024-12-09 05:31:50.174084] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 01:37:07.947 [2024-12-09 05:31:50.174103] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:37:14.518 [2024-12-09 05:31:55.702504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.702580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:37:14.518 [2024-12-09 05:31:55.702599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5537.395 ms 01:37:14.518 [2024-12-09 05:31:55.702613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.749660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.749720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:37:14.518 [2024-12-09 05:31:55.749737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.826 ms 01:37:14.518 [2024-12-09 05:31:55.749751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.749888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.749906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:37:14.518 [2024-12-09 05:31:55.749919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 01:37:14.518 [2024-12-09 05:31:55.749941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.802056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.802323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:37:14.518 [2024-12-09 05:31:55.802349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.134 ms 01:37:14.518 [2024-12-09 05:31:55.802368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.802417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.802433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:37:14.518 [2024-12-09 05:31:55.802446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:37:14.518 [2024-12-09 05:31:55.802493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.803319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.803341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:37:14.518 [2024-12-09 05:31:55.803354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 01:37:14.518 [2024-12-09 05:31:55.803368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.803488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.803508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:37:14.518 [2024-12-09 05:31:55.803519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 01:37:14.518 [2024-12-09 05:31:55.803537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.829729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.829771] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:37:14.518 [2024-12-09 05:31:55.829785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.211 ms 01:37:14.518 [2024-12-09 05:31:55.829798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:55.872281] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:37:14.518 [2024-12-09 05:31:55.878175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:55.878392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:37:14.518 [2024-12-09 05:31:55.878423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.364 ms 01:37:14.518 [2024-12-09 05:31:55.878435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:56.038194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:56.038256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:37:14.518 [2024-12-09 05:31:56.038277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 159.952 ms 01:37:14.518 [2024-12-09 05:31:56.038288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.518 [2024-12-09 05:31:56.038514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.518 [2024-12-09 05:31:56.038528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:37:14.519 [2024-12-09 05:31:56.038548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 01:37:14.519 [2024-12-09 05:31:56.038559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.073329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.073574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:37:14.519 [2024-12-09 05:31:56.073603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.769 ms 01:37:14.519 [2024-12-09 05:31:56.073614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.106986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.107029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:37:14.519 [2024-12-09 05:31:56.107048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.356 ms 01:37:14.519 [2024-12-09 05:31:56.107058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.107973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.108002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:37:14.519 [2024-12-09 05:31:56.108022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 01:37:14.519 [2024-12-09 05:31:56.108033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.215570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.215609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:37:14.519 [2024-12-09 05:31:56.215632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.650 ms 01:37:14.519 [2024-12-09 05:31:56.215644] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.252155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.252192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:37:14.519 [2024-12-09 05:31:56.252211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.483 ms 01:37:14.519 [2024-12-09 05:31:56.252222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.286731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.286888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:37:14.519 [2024-12-09 05:31:56.286923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.519 ms 01:37:14.519 [2024-12-09 05:31:56.286933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.321561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.321597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:37:14.519 [2024-12-09 05:31:56.321615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.639 ms 01:37:14.519 [2024-12-09 05:31:56.321626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.321687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.321699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:37:14.519 [2024-12-09 05:31:56.321717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:37:14.519 [2024-12-09 05:31:56.321728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.321839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:37:14.519 [2024-12-09 05:31:56.321855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:37:14.519 [2024-12-09 05:31:56.321873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:37:14.519 [2024-12-09 05:31:56.321884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:37:14.519 [2024-12-09 05:31:56.323263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6185.094 ms, result 0 01:37:14.519 { 01:37:14.519 "name": "ftl0", 01:37:14.519 "uuid": "d5ef8fa2-4aaa-43c5-83fd-e3213962e517" 01:37:14.519 } 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 01:37:14.519 /dev/nbd0 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 01:37:14.519 1+0 records in 01:37:14.519 1+0 records out 01:37:14.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410529 s, 10.0 MB/s 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 01:37:14.519 05:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 01:37:14.776 [2024-12-09 05:31:56.995209] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:37:14.776 [2024-12-09 05:31:56.995368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81638 ] 01:37:14.776 [2024-12-09 05:31:57.178428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:15.034 [2024-12-09 05:31:57.306195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:16.408  [2024-12-09T05:31:59.796Z] Copying: 209/1024 [MB] (209 MBps) [2024-12-09T05:32:00.730Z] Copying: 418/1024 [MB] (209 MBps) [2024-12-09T05:32:01.667Z] Copying: 628/1024 [MB] (210 MBps) [2024-12-09T05:32:03.046Z] Copying: 824/1024 [MB] (196 MBps) [2024-12-09T05:32:03.046Z] Copying: 1022/1024 [MB] (197 MBps) [2024-12-09T05:32:03.982Z] Copying: 1024/1024 [MB] (average 204 MBps) 01:37:21.526 01:37:21.526 05:32:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:37:23.431 05:32:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 01:37:23.432 [2024-12-09 05:32:05.845655] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
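The two spdk_dd invocations above carry the data half of the test: the first fills a 1 GiB file (262144 blocks x 4096 B) from /dev/urandom at ~204 MBps, md5sum records its digest, and the second streams that file onto the FTL bdev through /dev/nbd0 with --oflag=direct. A condensed sketch of the fill-hash-write flow under the same paths; the readback-and-compare tail is a hypothetical illustration of how the recorded digest would be consumed after a restart, not a line-for-line copy of dirty_shutdown.sh:

  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # 262144 x 4096-byte blocks = 1 GiB of random payload, run on core 1 (mask 0x2).
  "$spdk_dd" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
  md5_before=$(md5sum "$testfile" | awk '{print $1}')

  # Push the payload onto the FTL device exposed at /dev/nbd0, bypassing the page cache.
  "$spdk_dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

  # Hypothetical post-restart check: read the region back and compare digests.
  dd if=/dev/nbd0 of="$testfile.readback" bs=4096 count=262144 iflag=direct
  md5_after=$(md5sum "$testfile.readback" | awk '{print $1}')
  [[ $md5_before == "$md5_after" ]] || echo "FTL data mismatch after dirty shutdown" >&2

Both spdk_dd runs use -m 0x2, presumably so the dd app's reactor on core 1 never contends with the target's reactor on core 0 (-m 0x1).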
01:37:23.432 [2024-12-09 05:32:05.845777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81725 ] 01:37:23.689 [2024-12-09 05:32:06.029226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:23.948 [2024-12-09 05:32:06.151051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:25.338  [2024-12-09T05:32:08.730Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-09T05:32:09.668Z] Copying: 32/1024 [MB] (16 MBps) [2024-12-09T05:32:10.605Z] Copying: 48/1024 [MB] (16 MBps) [2024-12-09T05:32:11.544Z] Copying: 64/1024 [MB] (15 MBps) [2024-12-09T05:32:12.923Z] Copying: 79/1024 [MB] (15 MBps) [2024-12-09T05:32:13.861Z] Copying: 95/1024 [MB] (15 MBps) [2024-12-09T05:32:14.798Z] Copying: 111/1024 [MB] (15 MBps) [2024-12-09T05:32:15.735Z] Copying: 127/1024 [MB] (15 MBps) [2024-12-09T05:32:16.673Z] Copying: 142/1024 [MB] (15 MBps) [2024-12-09T05:32:17.610Z] Copying: 159/1024 [MB] (16 MBps) [2024-12-09T05:32:18.547Z] Copying: 175/1024 [MB] (15 MBps) [2024-12-09T05:32:19.929Z] Copying: 191/1024 [MB] (16 MBps) [2024-12-09T05:32:20.867Z] Copying: 208/1024 [MB] (16 MBps) [2024-12-09T05:32:21.848Z] Copying: 225/1024 [MB] (16 MBps) [2024-12-09T05:32:22.787Z] Copying: 241/1024 [MB] (16 MBps) [2024-12-09T05:32:23.725Z] Copying: 257/1024 [MB] (15 MBps) [2024-12-09T05:32:24.664Z] Copying: 273/1024 [MB] (16 MBps) [2024-12-09T05:32:25.602Z] Copying: 290/1024 [MB] (16 MBps) [2024-12-09T05:32:26.554Z] Copying: 306/1024 [MB] (16 MBps) [2024-12-09T05:32:27.490Z] Copying: 322/1024 [MB] (16 MBps) [2024-12-09T05:32:28.867Z] Copying: 338/1024 [MB] (16 MBps) [2024-12-09T05:32:29.804Z] Copying: 354/1024 [MB] (16 MBps) [2024-12-09T05:32:30.742Z] Copying: 371/1024 [MB] (16 MBps) [2024-12-09T05:32:31.679Z] Copying: 388/1024 [MB] (17 MBps) [2024-12-09T05:32:32.613Z] Copying: 404/1024 [MB] (16 MBps) [2024-12-09T05:32:33.550Z] Copying: 421/1024 [MB] (16 MBps) [2024-12-09T05:32:34.488Z] Copying: 437/1024 [MB] (16 MBps) [2024-12-09T05:32:35.869Z] Copying: 454/1024 [MB] (16 MBps) [2024-12-09T05:32:36.822Z] Copying: 470/1024 [MB] (16 MBps) [2024-12-09T05:32:37.785Z] Copying: 487/1024 [MB] (16 MBps) [2024-12-09T05:32:38.721Z] Copying: 503/1024 [MB] (16 MBps) [2024-12-09T05:32:39.659Z] Copying: 519/1024 [MB] (15 MBps) [2024-12-09T05:32:40.597Z] Copying: 535/1024 [MB] (15 MBps) [2024-12-09T05:32:41.533Z] Copying: 551/1024 [MB] (16 MBps) [2024-12-09T05:32:42.472Z] Copying: 568/1024 [MB] (16 MBps) [2024-12-09T05:32:43.853Z] Copying: 584/1024 [MB] (16 MBps) [2024-12-09T05:32:44.793Z] Copying: 601/1024 [MB] (16 MBps) [2024-12-09T05:32:45.730Z] Copying: 618/1024 [MB] (16 MBps) [2024-12-09T05:32:46.665Z] Copying: 634/1024 [MB] (16 MBps) [2024-12-09T05:32:47.602Z] Copying: 651/1024 [MB] (16 MBps) [2024-12-09T05:32:48.539Z] Copying: 668/1024 [MB] (16 MBps) [2024-12-09T05:32:49.474Z] Copying: 683/1024 [MB] (15 MBps) [2024-12-09T05:32:50.851Z] Copying: 700/1024 [MB] (16 MBps) [2024-12-09T05:32:51.789Z] Copying: 716/1024 [MB] (16 MBps) [2024-12-09T05:32:52.800Z] Copying: 732/1024 [MB] (16 MBps) [2024-12-09T05:32:53.737Z] Copying: 749/1024 [MB] (16 MBps) [2024-12-09T05:32:54.683Z] Copying: 765/1024 [MB] (16 MBps) [2024-12-09T05:32:55.620Z] Copying: 781/1024 [MB] (15 MBps) [2024-12-09T05:32:56.557Z] Copying: 797/1024 [MB] (16 MBps) [2024-12-09T05:32:57.493Z] Copying: 815/1024 [MB] (18 MBps) 
[2024-12-09T05:32:58.872Z] Copying: 832/1024 [MB] (16 MBps) [2024-12-09T05:32:59.439Z] Copying: 848/1024 [MB] (16 MBps) [2024-12-09T05:33:00.812Z] Copying: 864/1024 [MB] (16 MBps) [2024-12-09T05:33:01.749Z] Copying: 880/1024 [MB] (16 MBps) [2024-12-09T05:33:02.685Z] Copying: 898/1024 [MB] (17 MBps) [2024-12-09T05:33:03.622Z] Copying: 915/1024 [MB] (16 MBps) [2024-12-09T05:33:04.558Z] Copying: 931/1024 [MB] (16 MBps) [2024-12-09T05:33:05.497Z] Copying: 949/1024 [MB] (17 MBps) [2024-12-09T05:33:06.433Z] Copying: 966/1024 [MB] (16 MBps) [2024-12-09T05:33:07.870Z] Copying: 982/1024 [MB] (16 MBps) [2024-12-09T05:33:08.438Z] Copying: 999/1024 [MB] (16 MBps) [2024-12-09T05:33:09.017Z] Copying: 1015/1024 [MB] (16 MBps) [2024-12-09T05:33:10.392Z] Copying: 1024/1024 [MB] (average 16 MBps) 01:38:27.936 01:38:27.936 05:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 01:38:27.936 05:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 01:38:28.196 05:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:38:28.196 [2024-12-09 05:33:10.646855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.196 [2024-12-09 05:33:10.646911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:38:28.196 [2024-12-09 05:33:10.646928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:38:28.196 [2024-12-09 05:33:10.646947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.196 [2024-12-09 05:33:10.646972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:38:28.196 [2024-12-09 05:33:10.651431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.196 [2024-12-09 05:33:10.651475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:38:28.196 [2024-12-09 05:33:10.651491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 01:38:28.196 [2024-12-09 05:33:10.651501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.653441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.653488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:38:28.456 [2024-12-09 05:33:10.653505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 01:38:28.456 [2024-12-09 05:33:10.653516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.672019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.672057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:38:28.456 [2024-12-09 05:33:10.672081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.504 ms 01:38:28.456 [2024-12-09 05:33:10.672091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.676836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.676868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:38:28.456 [2024-12-09 05:33:10.676883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 01:38:28.456 [2024-12-09 05:33:10.676893] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.712422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.712458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:38:28.456 [2024-12-09 05:33:10.712488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.507 ms 01:38:28.456 [2024-12-09 05:33:10.712514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.733974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.734020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:38:28.456 [2024-12-09 05:33:10.734042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.445 ms 01:38:28.456 [2024-12-09 05:33:10.734052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.734194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.734209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:38:28.456 [2024-12-09 05:33:10.734222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 01:38:28.456 [2024-12-09 05:33:10.734232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.768338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.768374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:38:28.456 [2024-12-09 05:33:10.768391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.138 ms 01:38:28.456 [2024-12-09 05:33:10.768401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.801308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.801342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:38:28.456 [2024-12-09 05:33:10.801358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.916 ms 01:38:28.456 [2024-12-09 05:33:10.801366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.834128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.834163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:38:28.456 [2024-12-09 05:33:10.834180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.768 ms 01:38:28.456 [2024-12-09 05:33:10.834189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.866965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.456 [2024-12-09 05:33:10.867149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:38:28.456 [2024-12-09 05:33:10.867176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.727 ms 01:38:28.456 [2024-12-09 05:33:10.867185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.456 [2024-12-09 05:33:10.867266] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:38:28.456 [2024-12-09 05:33:10.867285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867302] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867640] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:38:28.456 [2024-12-09 05:33:10.867808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 
05:33:10.867973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.867998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
01:38:28.457 [2024-12-09 05:33:10.868288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:38:28.457 [2024-12-09 05:33:10.868636] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:38:28.457 
[2024-12-09 05:33:10.868648] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5ef8fa2-4aaa-43c5-83fd-e3213962e517 01:38:28.457 [2024-12-09 05:33:10.868659] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:38:28.457 [2024-12-09 05:33:10.868675] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:38:28.457 [2024-12-09 05:33:10.868688] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:38:28.457 [2024-12-09 05:33:10.868704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:38:28.457 [2024-12-09 05:33:10.868713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:38:28.457 [2024-12-09 05:33:10.868725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:38:28.457 [2024-12-09 05:33:10.868735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:38:28.457 [2024-12-09 05:33:10.868746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:38:28.457 [2024-12-09 05:33:10.868755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:38:28.457 [2024-12-09 05:33:10.868767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.457 [2024-12-09 05:33:10.868778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:38:28.457 [2024-12-09 05:33:10.868792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 01:38:28.457 [2024-12-09 05:33:10.868801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.457 [2024-12-09 05:33:10.888990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.457 [2024-12-09 05:33:10.889021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:38:28.457 [2024-12-09 05:33:10.889036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.168 ms 01:38:28.457 [2024-12-09 05:33:10.889047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.457 [2024-12-09 05:33:10.889654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:28.457 [2024-12-09 05:33:10.889667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:38:28.457 [2024-12-09 05:33:10.889681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 01:38:28.457 [2024-12-09 05:33:10.889692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.717 [2024-12-09 05:33:10.957745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.717 [2024-12-09 05:33:10.957780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:38:28.717 [2024-12-09 05:33:10.957797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.717 [2024-12-09 05:33:10.957808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.717 [2024-12-09 05:33:10.957869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.717 [2024-12-09 05:33:10.957880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:38:28.717 [2024-12-09 05:33:10.957894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.717 [2024-12-09 05:33:10.957903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.717 [2024-12-09 05:33:10.958001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.717 [2024-12-09 
05:33:10.958019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:38:28.717 [2024-12-09 05:33:10.958032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.717 [2024-12-09 05:33:10.958042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.717 [2024-12-09 05:33:10.958068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.717 [2024-12-09 05:33:10.958079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:38:28.717 [2024-12-09 05:33:10.958092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.717 [2024-12-09 05:33:10.958102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.717 [2024-12-09 05:33:11.083993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.717 [2024-12-09 05:33:11.084041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:38:28.717 [2024-12-09 05:33:11.084058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.717 [2024-12-09 05:33:11.084069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.184939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.184989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:38:28.976 [2024-12-09 05:33:11.185008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.185170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:38:28.976 [2024-12-09 05:33:11.185188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.185278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:38:28.976 [2024-12-09 05:33:11.185293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.185439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:38:28.976 [2024-12-09 05:33:11.185452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.185551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:38:28.976 [2024-12-09 05:33:11.185565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185628] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.185640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:38:28.976 [2024-12-09 05:33:11.185653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:38:28.976 [2024-12-09 05:33:11.185741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:38:28.976 [2024-12-09 05:33:11.185755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:38:28.976 [2024-12-09 05:33:11.185765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:28.976 [2024-12-09 05:33:11.185927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.893 ms, result 0 01:38:28.976 true 01:38:28.976 05:33:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81468 01:38:28.976 05:33:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81468 01:38:28.977 05:33:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 01:38:28.977 [2024-12-09 05:33:11.320669] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:38:28.977 [2024-12-09 05:33:11.320789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82388 ] 01:38:29.236 [2024-12-09 05:33:11.503393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:29.236 [2024-12-09 05:33:11.634403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:30.614  [2024-12-09T05:33:14.008Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-09T05:33:15.386Z] Copying: 420/1024 [MB] (211 MBps) [2024-12-09T05:33:16.323Z] Copying: 632/1024 [MB] (212 MBps) [2024-12-09T05:33:16.892Z] Copying: 844/1024 [MB] (211 MBps) [2024-12-09T05:33:18.272Z] Copying: 1024/1024 [MB] (average 210 MBps) 01:38:35.816 01:38:35.816 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81468 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 01:38:35.816 05:33:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:38:35.816 [2024-12-09 05:33:18.240936] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:38:35.816 [2024-12-09 05:33:18.241266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82458 ] 01:38:36.075 [2024-12-09 05:33:18.421974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:36.333 [2024-12-09 05:33:18.548512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:36.592 [2024-12-09 05:33:18.955166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:38:36.592 [2024-12-09 05:33:18.955248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:38:36.592 [2024-12-09 05:33:19.022440] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 01:38:36.592 [2024-12-09 05:33:19.022792] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:38:36.592 [2024-12-09 05:33:19.022975] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:38:37.162 [2024-12-09 05:33:19.344765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.345024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:38:37.162 [2024-12-09 05:33:19.345051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:38:37.162 [2024-12-09 05:33:19.345068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.345132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.345146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:38:37.162 [2024-12-09 05:33:19.345158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 01:38:37.162 [2024-12-09 05:33:19.345169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.345192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:38:37.162 [2024-12-09 05:33:19.346182] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:38:37.162 [2024-12-09 05:33:19.346213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.346225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:38:37.162 [2024-12-09 05:33:19.346237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 01:38:37.162 [2024-12-09 05:33:19.346248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.348759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:38:37.162 [2024-12-09 05:33:19.368949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.369000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:38:37.162 [2024-12-09 05:33:19.369015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.223 ms 01:38:37.162 [2024-12-09 05:33:19.369026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.369091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.369105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 01:38:37.162 [2024-12-09 05:33:19.369116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 01:38:37.162 [2024-12-09 05:33:19.369127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.381284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.381313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:38:37.162 [2024-12-09 05:33:19.381326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.103 ms 01:38:37.162 [2024-12-09 05:33:19.381336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.381424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.381437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:38:37.162 [2024-12-09 05:33:19.381448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:38:37.162 [2024-12-09 05:33:19.381473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.381553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.381566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:38:37.162 [2024-12-09 05:33:19.381578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:38:37.162 [2024-12-09 05:33:19.381588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.381615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:38:37.162 [2024-12-09 05:33:19.387062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.387094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:38:37.162 [2024-12-09 05:33:19.387107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.464 ms 01:38:37.162 [2024-12-09 05:33:19.387117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.387149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.387160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:38:37.162 [2024-12-09 05:33:19.387171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:38:37.162 [2024-12-09 05:33:19.387181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.387221] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:38:37.162 [2024-12-09 05:33:19.387247] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:38:37.162 [2024-12-09 05:33:19.387286] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:38:37.162 [2024-12-09 05:33:19.387304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:38:37.162 [2024-12-09 05:33:19.387398] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:38:37.162 [2024-12-09 05:33:19.387413] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:38:37.162 
[2024-12-09 05:33:19.387427] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:38:37.162 [2024-12-09 05:33:19.387445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:38:37.162 [2024-12-09 05:33:19.387457] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:38:37.162 [2024-12-09 05:33:19.387492] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:38:37.162 [2024-12-09 05:33:19.387503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:38:37.162 [2024-12-09 05:33:19.387513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:38:37.162 [2024-12-09 05:33:19.387537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:38:37.162 [2024-12-09 05:33:19.387548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.387558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:38:37.162 [2024-12-09 05:33:19.387570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 01:38:37.162 [2024-12-09 05:33:19.387580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.387651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.162 [2024-12-09 05:33:19.387668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:38:37.162 [2024-12-09 05:33:19.387679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:38:37.162 [2024-12-09 05:33:19.387691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.162 [2024-12-09 05:33:19.387788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:38:37.162 [2024-12-09 05:33:19.387804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:38:37.162 [2024-12-09 05:33:19.387816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:38:37.162 [2024-12-09 05:33:19.387827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:38:37.162 [2024-12-09 05:33:19.387837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:38:37.162 [2024-12-09 05:33:19.387847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:38:37.162 [2024-12-09 05:33:19.387856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:38:37.162 [2024-12-09 05:33:19.387866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:38:37.162 [2024-12-09 05:33:19.387876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:38:37.162 [2024-12-09 05:33:19.387897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:38:37.162 [2024-12-09 05:33:19.387909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:38:37.162 [2024-12-09 05:33:19.387918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:38:37.162 [2024-12-09 05:33:19.387928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:38:37.163 [2024-12-09 05:33:19.387938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:38:37.163 [2024-12-09 05:33:19.387947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:38:37.163 [2024-12-09 05:33:19.387956] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:38:37.163 [2024-12-09 05:33:19.387966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:38:37.163 [2024-12-09 05:33:19.387975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:38:37.163 [2024-12-09 05:33:19.387985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:38:37.163 [2024-12-09 05:33:19.387994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:38:37.163 [2024-12-09 05:33:19.388004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:38:37.163 [2024-12-09 05:33:19.388022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:38:37.163 [2024-12-09 05:33:19.388031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:38:37.163 [2024-12-09 05:33:19.388050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:38:37.163 [2024-12-09 05:33:19.388059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:38:37.163 [2024-12-09 05:33:19.388077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:38:37.163 [2024-12-09 05:33:19.388086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:38:37.163 [2024-12-09 05:33:19.388106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:38:37.163 [2024-12-09 05:33:19.388115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:38:37.163 [2024-12-09 05:33:19.388133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:38:37.163 [2024-12-09 05:33:19.388142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:38:37.163 [2024-12-09 05:33:19.388150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:38:37.163 [2024-12-09 05:33:19.388159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:38:37.163 [2024-12-09 05:33:19.388168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:38:37.163 [2024-12-09 05:33:19.388177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:38:37.163 [2024-12-09 05:33:19.388196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:38:37.163 [2024-12-09 05:33:19.388206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:38:37.163 [2024-12-09 05:33:19.388216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:38:37.163 [2024-12-09 05:33:19.388226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:38:37.163 [2024-12-09 05:33:19.388240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:38:37.163 [2024-12-09 05:33:19.388250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:38:37.163 [2024-12-09 
05:33:19.388260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:38:37.163 [2024-12-09 05:33:19.388270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:38:37.163 [2024-12-09 05:33:19.388279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:38:37.163 [2024-12-09 05:33:19.388289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:38:37.163 [2024-12-09 05:33:19.388298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:38:37.163 [2024-12-09 05:33:19.388308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:38:37.163 [2024-12-09 05:33:19.388319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:38:37.163 [2024-12-09 05:33:19.388332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:38:37.163 [2024-12-09 05:33:19.388356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:38:37.163 [2024-12-09 05:33:19.388367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:38:37.163 [2024-12-09 05:33:19.388378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:38:37.163 [2024-12-09 05:33:19.388389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:38:37.163 [2024-12-09 05:33:19.388399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:38:37.163 [2024-12-09 05:33:19.388410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:38:37.163 [2024-12-09 05:33:19.388421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:38:37.163 [2024-12-09 05:33:19.388431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:38:37.163 [2024-12-09 05:33:19.388442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:38:37.163 [2024-12-09 05:33:19.388506] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 01:38:37.163 [2024-12-09 05:33:19.388517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:38:37.163 [2024-12-09 05:33:19.388539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:38:37.163 [2024-12-09 05:33:19.388549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:38:37.163 [2024-12-09 05:33:19.388561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:38:37.163 [2024-12-09 05:33:19.388572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.388583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:38:37.163 [2024-12-09 05:33:19.388594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 01:38:37.163 [2024-12-09 05:33:19.388604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.436326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.436362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:38:37.163 [2024-12-09 05:33:19.436376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.736 ms 01:38:37.163 [2024-12-09 05:33:19.436387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.436494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.436507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:38:37.163 [2024-12-09 05:33:19.436518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 01:38:37.163 [2024-12-09 05:33:19.436529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.499912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.500230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:38:37.163 [2024-12-09 05:33:19.500261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.429 ms 01:38:37.163 [2024-12-09 05:33:19.500273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.500343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.500356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:38:37.163 [2024-12-09 05:33:19.500368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:38:37.163 [2024-12-09 05:33:19.500378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.501144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.501160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:38:37.163 [2024-12-09 05:33:19.501172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 01:38:37.163 [2024-12-09 05:33:19.501191] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.501319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.501333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:38:37.163 [2024-12-09 05:33:19.501344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 01:38:37.163 [2024-12-09 05:33:19.501354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.524389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.524423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:38:37.163 [2024-12-09 05:33:19.524437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.049 ms 01:38:37.163 [2024-12-09 05:33:19.524449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.543791] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:38:37.163 [2024-12-09 05:33:19.543829] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:38:37.163 [2024-12-09 05:33:19.543845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.163 [2024-12-09 05:33:19.543856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:38:37.163 [2024-12-09 05:33:19.543867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.308 ms 01:38:37.163 [2024-12-09 05:33:19.543878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.163 [2024-12-09 05:33:19.573605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.164 [2024-12-09 05:33:19.573643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:38:37.164 [2024-12-09 05:33:19.573658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.733 ms 01:38:37.164 [2024-12-09 05:33:19.573669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.164 [2024-12-09 05:33:19.591130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.164 [2024-12-09 05:33:19.591163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:38:37.164 [2024-12-09 05:33:19.591176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.440 ms 01:38:37.164 [2024-12-09 05:33:19.591186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.164 [2024-12-09 05:33:19.608696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.164 [2024-12-09 05:33:19.608732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:38:37.164 [2024-12-09 05:33:19.608745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.500 ms 01:38:37.164 [2024-12-09 05:33:19.608755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.164 [2024-12-09 05:33:19.609507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.164 [2024-12-09 05:33:19.609658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:38:37.164 [2024-12-09 05:33:19.609680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 01:38:37.164 [2024-12-09 05:33:19.609692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
01:38:37.423 [2024-12-09 05:33:19.704729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.704999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:38:37.423 [2024-12-09 05:33:19.705026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.163 ms 01:38:37.423 [2024-12-09 05:33:19.705039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.715488] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:38:37.423 [2024-12-09 05:33:19.718953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.718984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:38:37.423 [2024-12-09 05:33:19.718998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.824 ms 01:38:37.423 [2024-12-09 05:33:19.719023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.719151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.719166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:38:37.423 [2024-12-09 05:33:19.719177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:38:37.423 [2024-12-09 05:33:19.719188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.719283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.719299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:38:37.423 [2024-12-09 05:33:19.719310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 01:38:37.423 [2024-12-09 05:33:19.719321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.719350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.719363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:38:37.423 [2024-12-09 05:33:19.719375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:38:37.423 [2024-12-09 05:33:19.719385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.719428] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:38:37.423 [2024-12-09 05:33:19.719442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.719453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:38:37.423 [2024-12-09 05:33:19.719483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:38:37.423 [2024-12-09 05:33:19.719499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.755144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 05:33:19.755184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:38:37.423 [2024-12-09 05:33:19.755199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.678 ms 01:38:37.423 [2024-12-09 05:33:19.755209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.755288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:38:37.423 [2024-12-09 
05:33:19.755300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:38:37.423 [2024-12-09 05:33:19.755312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:38:37.423 [2024-12-09 05:33:19.755322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:38:37.423 [2024-12-09 05:33:19.756767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.108 ms, result 0 01:38:38.359  [2024-12-09T05:34:01.066Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 05:34:00.767906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.768110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:39:18.610 [2024-12-09 05:34:00.768136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:39:18.610 [2024-12-09 05:34:00.768148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.769329] mngt/ftl_mngt_ioch.c:
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:39:18.610 [2024-12-09 05:34:00.773910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.773946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:39:18.610 [2024-12-09 05:34:00.773958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.554 ms 01:39:18.610 [2024-12-09 05:34:00.773977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.782872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.783027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:39:18.610 [2024-12-09 05:34:00.783049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.651 ms 01:39:18.610 [2024-12-09 05:34:00.783059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.805543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.805703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:39:18.610 [2024-12-09 05:34:00.805724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 01:39:18.610 [2024-12-09 05:34:00.805735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.810400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.810432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:39:18.610 [2024-12-09 05:34:00.810444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 01:39:18.610 [2024-12-09 05:34:00.810453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.846411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.846449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:39:18.610 [2024-12-09 05:34:00.846472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.952 ms 01:39:18.610 [2024-12-09 05:34:00.846482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.867560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.867731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:39:18.610 [2024-12-09 05:34:00.867752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.076 ms 01:39:18.610 [2024-12-09 05:34:00.867763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:00.979075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:00.979225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:39:18.610 [2024-12-09 05:34:00.979253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.452 ms 01:39:18.610 [2024-12-09 05:34:00.979263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:01.013898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:01.013934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:39:18.610 [2024-12-09 05:34:01.013947] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 34.670 ms 01:39:18.610 [2024-12-09 05:34:01.013969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.610 [2024-12-09 05:34:01.048567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.610 [2024-12-09 05:34:01.048702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:39:18.610 [2024-12-09 05:34:01.048722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.619 ms 01:39:18.610 [2024-12-09 05:34:01.048732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.870 [2024-12-09 05:34:01.082305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.870 [2024-12-09 05:34:01.082341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:39:18.870 [2024-12-09 05:34:01.082353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.573 ms 01:39:18.870 [2024-12-09 05:34:01.082362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.870 [2024-12-09 05:34:01.115141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.870 [2024-12-09 05:34:01.115176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:39:18.870 [2024-12-09 05:34:01.115188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.761 ms 01:39:18.870 [2024-12-09 05:34:01.115197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.871 [2024-12-09 05:34:01.115232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:39:18.871 [2024-12-09 05:34:01.115247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102144 / 261120 wr_cnt: 1 state: open 01:39:18.871 [2024-12-09 05:34:01.115260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115380] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115668] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 
05:34:01.115923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.115991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:39:18.871 [2024-12-09 05:34:01.116152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
01:39:18.872 [2024-12-09 05:34:01.116171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:39:18.872 [2024-12-09 05:34:01.116301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:39:18.872 [2024-12-09 05:34:01.116310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5ef8fa2-4aaa-43c5-83fd-e3213962e517 01:39:18.872 [2024-12-09 05:34:01.116337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102144 01:39:18.872 [2024-12-09 05:34:01.116346] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103104 01:39:18.872 [2024-12-09 05:34:01.116356] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102144 01:39:18.872 [2024-12-09 05:34:01.116366] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 01:39:18.872 [2024-12-09 05:34:01.116376] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:39:18.872 [2024-12-09 05:34:01.116386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:39:18.872 [2024-12-09 05:34:01.116395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:39:18.872 [2024-12-09 05:34:01.116403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:39:18.872 [2024-12-09 05:34:01.116411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:39:18.872 [2024-12-09 05:34:01.116424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.872 [2024-12-09 05:34:01.116434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:39:18.872 [2024-12-09 05:34:01.116445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 01:39:18.872 [2024-12-09 05:34:01.116455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.136086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:39:18.872 [2024-12-09 05:34:01.136119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:39:18.872 [2024-12-09 05:34:01.136131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.621 ms 01:39:18.872 [2024-12-09 05:34:01.136145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.136792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:18.872 [2024-12-09 05:34:01.136809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:39:18.872 [2024-12-09 05:34:01.136826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 01:39:18.872 [2024-12-09 05:34:01.136836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.189835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:18.872 [2024-12-09 05:34:01.189870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:39:18.872 [2024-12-09 05:34:01.189883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:18.872 [2024-12-09 05:34:01.189894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.189959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:18.872 [2024-12-09 05:34:01.189970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:39:18.872 [2024-12-09 05:34:01.189984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:18.872 [2024-12-09 05:34:01.189994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.190072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:18.872 [2024-12-09 05:34:01.190086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:39:18.872 [2024-12-09 05:34:01.190096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:18.872 [2024-12-09 05:34:01.190106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.190122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:18.872 [2024-12-09 05:34:01.190133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:39:18.872 [2024-12-09 05:34:01.190143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:18.872 [2024-12-09 05:34:01.190153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:18.872 [2024-12-09 05:34:01.318832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:18.872 [2024-12-09 05:34:01.318892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:39:18.872 [2024-12-09 05:34:01.318910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:18.872 [2024-12-09 05:34:01.318921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.418986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:39:19.130 [2024-12-09 05:34:01.419073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 
05:34:01.419197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:39:19.130 [2024-12-09 05:34:01.419221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.419289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:39:19.130 [2024-12-09 05:34:01.419312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.419453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:39:19.130 [2024-12-09 05:34:01.419499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.419551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:39:19.130 [2024-12-09 05:34:01.419574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.419635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:39:19.130 [2024-12-09 05:34:01.419658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.419746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:19.130 [2024-12-09 05:34:01.419762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:39:19.130 [2024-12-09 05:34:01.419774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:19.130 [2024-12-09 05:34:01.419785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:19.130 [2024-12-09 05:34:01.419935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.061 ms, result 0 01:39:21.027 01:39:21.027 01:39:21.027 05:34:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 01:39:22.936 05:34:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:39:22.936 [2024-12-09 05:34:04.971161] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:39:22.936 [2024-12-09 05:34:04.971290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82922 ] 01:39:22.936 [2024-12-09 05:34:05.155477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:39:22.936 [2024-12-09 05:34:05.281301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:39:23.508 [2024-12-09 05:34:05.682878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:39:23.508 [2024-12-09 05:34:05.683262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:39:23.508 [2024-12-09 05:34:05.848142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.848199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:39:23.508 [2024-12-09 05:34:05.848216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:39:23.508 [2024-12-09 05:34:05.848227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.848282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.848298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:39:23.508 [2024-12-09 05:34:05.848310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 01:39:23.508 [2024-12-09 05:34:05.848321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.848343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:39:23.508 [2024-12-09 05:34:05.849259] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:39:23.508 [2024-12-09 05:34:05.849291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.849303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:39:23.508 [2024-12-09 05:34:05.849314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 01:39:23.508 [2024-12-09 05:34:05.849324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.851741] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:39:23.508 [2024-12-09 05:34:05.871199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.871239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:39:23.508 [2024-12-09 05:34:05.871253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.491 ms 01:39:23.508 [2024-12-09 05:34:05.871265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.871336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.871350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:39:23.508 [2024-12-09 05:34:05.871361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 01:39:23.508 [2024-12-09 05:34:05.871372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.883508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:39:23.508 [2024-12-09 05:34:05.883538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:39:23.508 [2024-12-09 05:34:05.883556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.082 ms 01:39:23.508 [2024-12-09 05:34:05.883566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.883656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.883671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:39:23.508 [2024-12-09 05:34:05.883683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 01:39:23.508 [2024-12-09 05:34:05.883694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.883749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.883763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:39:23.508 [2024-12-09 05:34:05.883775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:39:23.508 [2024-12-09 05:34:05.883790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.508 [2024-12-09 05:34:05.883815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:39:23.508 [2024-12-09 05:34:05.889498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.508 [2024-12-09 05:34:05.889684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:39:23.508 [2024-12-09 05:34:05.889705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.699 ms 01:39:23.508 [2024-12-09 05:34:05.889728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.509 [2024-12-09 05:34:05.889765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.509 [2024-12-09 05:34:05.889777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:39:23.509 [2024-12-09 05:34:05.889789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:39:23.509 [2024-12-09 05:34:05.889799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.509 [2024-12-09 05:34:05.889838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:39:23.509 [2024-12-09 05:34:05.889866] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:39:23.509 [2024-12-09 05:34:05.889909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:39:23.509 [2024-12-09 05:34:05.889929] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:39:23.509 [2024-12-09 05:34:05.890020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:39:23.509 [2024-12-09 05:34:05.890035] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:39:23.509 [2024-12-09 05:34:05.890058] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:39:23.509 [2024-12-09 05:34:05.890072] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890085] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890097] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:39:23.509 [2024-12-09 05:34:05.890108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:39:23.509 [2024-12-09 05:34:05.890123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:39:23.509 [2024-12-09 05:34:05.890133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:39:23.509 [2024-12-09 05:34:05.890145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.509 [2024-12-09 05:34:05.890156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:39:23.509 [2024-12-09 05:34:05.890167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 01:39:23.509 [2024-12-09 05:34:05.890177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.509 [2024-12-09 05:34:05.890249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.509 [2024-12-09 05:34:05.890261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:39:23.509 [2024-12-09 05:34:05.890272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 01:39:23.509 [2024-12-09 05:34:05.890283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.509 [2024-12-09 05:34:05.890381] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:39:23.509 [2024-12-09 05:34:05.890396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:39:23.509 [2024-12-09 05:34:05.890408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:39:23.509 [2024-12-09 05:34:05.890439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:39:23.509 [2024-12-09 05:34:05.890469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:39:23.509 [2024-12-09 05:34:05.890514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:39:23.509 [2024-12-09 05:34:05.890525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:39:23.509 [2024-12-09 05:34:05.890534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:39:23.509 [2024-12-09 05:34:05.890555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:39:23.509 [2024-12-09 05:34:05.890565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:39:23.509 [2024-12-09 05:34:05.890575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:39:23.509 [2024-12-09 05:34:05.890595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890603] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:39:23.509 [2024-12-09 05:34:05.890622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:39:23.509 [2024-12-09 05:34:05.890650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:39:23.509 [2024-12-09 05:34:05.890677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:39:23.509 [2024-12-09 05:34:05.890705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:39:23.509 [2024-12-09 05:34:05.890735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:39:23.509 [2024-12-09 05:34:05.890753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:39:23.509 [2024-12-09 05:34:05.890762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:39:23.509 [2024-12-09 05:34:05.890771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:39:23.509 [2024-12-09 05:34:05.890780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:39:23.509 [2024-12-09 05:34:05.890789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:39:23.509 [2024-12-09 05:34:05.890797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:39:23.509 [2024-12-09 05:34:05.890815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:39:23.509 [2024-12-09 05:34:05.890825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890835] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:39:23.509 [2024-12-09 05:34:05.890845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:39:23.509 [2024-12-09 05:34:05.890856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:39:23.509 [2024-12-09 05:34:05.890876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:39:23.509 [2024-12-09 05:34:05.890885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:39:23.509 [2024-12-09 05:34:05.890894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:39:23.509 
[2024-12-09 05:34:05.890904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:39:23.509 [2024-12-09 05:34:05.890914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:39:23.509 [2024-12-09 05:34:05.890923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:39:23.509 [2024-12-09 05:34:05.890934] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:39:23.509 [2024-12-09 05:34:05.890952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:39:23.509 [2024-12-09 05:34:05.890964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:39:23.509 [2024-12-09 05:34:05.890974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:39:23.509 [2024-12-09 05:34:05.890984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:39:23.509 [2024-12-09 05:34:05.890994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:39:23.509 [2024-12-09 05:34:05.891005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:39:23.509 [2024-12-09 05:34:05.891024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:39:23.509 [2024-12-09 05:34:05.891035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:39:23.509 [2024-12-09 05:34:05.891046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:39:23.509 [2024-12-09 05:34:05.891056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:39:23.509 [2024-12-09 05:34:05.891067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:39:23.509 [2024-12-09 05:34:05.891077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:39:23.509 [2024-12-09 05:34:05.891087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:39:23.509 [2024-12-09 05:34:05.891097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:39:23.509 [2024-12-09 05:34:05.891107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:39:23.509 [2024-12-09 05:34:05.891118] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:39:23.510 [2024-12-09 05:34:05.891130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:39:23.510 [2024-12-09 05:34:05.891141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:39:23.510 [2024-12-09 05:34:05.891151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:39:23.510 [2024-12-09 05:34:05.891162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:39:23.510 [2024-12-09 05:34:05.891174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:39:23.510 [2024-12-09 05:34:05.891185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.510 [2024-12-09 05:34:05.891196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:39:23.510 [2024-12-09 05:34:05.891208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 01:39:23.510 [2024-12-09 05:34:05.891218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.510 [2024-12-09 05:34:05.937457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.510 [2024-12-09 05:34:05.937506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:39:23.510 [2024-12-09 05:34:05.937526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.262 ms 01:39:23.510 [2024-12-09 05:34:05.937537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.510 [2024-12-09 05:34:05.937613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.510 [2024-12-09 05:34:05.937625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:39:23.510 [2024-12-09 05:34:05.937636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:39:23.510 [2024-12-09 05:34:05.937650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.013946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.014139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:39:23.769 [2024-12-09 05:34:06.014163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.369 ms 01:39:23.769 [2024-12-09 05:34:06.014175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.014223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.014243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:39:23.769 [2024-12-09 05:34:06.014255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:39:23.769 [2024-12-09 05:34:06.014266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.015130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.015154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:39:23.769 [2024-12-09 05:34:06.015167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 01:39:23.769 [2024-12-09 05:34:06.015179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.015312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.015335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:39:23.769 [2024-12-09 05:34:06.015346] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 01:39:23.769 [2024-12-09 05:34:06.015357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.039498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.039534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:39:23.769 [2024-12-09 05:34:06.039548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.156 ms 01:39:23.769 [2024-12-09 05:34:06.039559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.058715] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 01:39:23.769 [2024-12-09 05:34:06.058755] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:39:23.769 [2024-12-09 05:34:06.058770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.058781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:39:23.769 [2024-12-09 05:34:06.058793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.133 ms 01:39:23.769 [2024-12-09 05:34:06.058802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.087924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.087962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:39:23.769 [2024-12-09 05:34:06.087976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.123 ms 01:39:23.769 [2024-12-09 05:34:06.087987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.105009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.105045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:39:23.769 [2024-12-09 05:34:06.105058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.997 ms 01:39:23.769 [2024-12-09 05:34:06.105068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.122198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.122232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:39:23.769 [2024-12-09 05:34:06.122245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.120 ms 01:39:23.769 [2024-12-09 05:34:06.122255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.123082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.123120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:39:23.769 [2024-12-09 05:34:06.123133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 01:39:23.769 [2024-12-09 05:34:06.123144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:23.769 [2024-12-09 05:34:06.217425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:23.769 [2024-12-09 05:34:06.217503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:39:23.769 [2024-12-09 05:34:06.217521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.411 ms 01:39:23.769 [2024-12-09 05:34:06.217532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.227579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:39:24.028 [2024-12-09 05:34:06.230670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.230861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:39:24.028 [2024-12-09 05:34:06.230885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.112 ms 01:39:24.028 [2024-12-09 05:34:06.230898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.230976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.230991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:39:24.028 [2024-12-09 05:34:06.231009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:39:24.028 [2024-12-09 05:34:06.231028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.233226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.233263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:39:24.028 [2024-12-09 05:34:06.233277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 01:39:24.028 [2024-12-09 05:34:06.233288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.233324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.233336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:39:24.028 [2024-12-09 05:34:06.233347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:39:24.028 [2024-12-09 05:34:06.233365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.233409] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:39:24.028 [2024-12-09 05:34:06.233423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.233434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:39:24.028 [2024-12-09 05:34:06.233455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:39:24.028 [2024-12-09 05:34:06.233465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.267843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.267882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:39:24.028 [2024-12-09 05:34:06.267902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.394 ms 01:39:24.028 [2024-12-09 05:34:06.267914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:24.028 [2024-12-09 05:34:06.267993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:24.028 [2024-12-09 05:34:06.268006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:39:24.028 [2024-12-09 05:34:06.268018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 01:39:24.028 [2024-12-09 05:34:06.268028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
01:39:24.028 [2024-12-09 05:34:06.269619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.613 ms, result 0 01:39:25.405  [2024-12-09T05:34:08.799Z] Copying: 1436/1048576 [kB] (1436 kBps) [2024-12-09T05:34:09.750Z] Copying: 13/1024 [MB] (11 MBps) [2024-12-09T05:34:10.748Z] Copying: 45/1024 [MB] (32 MBps) [2024-12-09T05:34:11.682Z] Copying: 78/1024 [MB] (32 MBps) [2024-12-09T05:34:12.649Z] Copying: 112/1024 [MB] (34 MBps) [2024-12-09T05:34:13.581Z] Copying: 146/1024 [MB] (34 MBps) [2024-12-09T05:34:14.516Z] Copying: 179/1024 [MB] (32 MBps) [2024-12-09T05:34:15.893Z] Copying: 213/1024 [MB] (33 MBps) [2024-12-09T05:34:16.828Z] Copying: 247/1024 [MB] (33 MBps) [2024-12-09T05:34:17.765Z] Copying: 280/1024 [MB] (33 MBps) [2024-12-09T05:34:18.722Z] Copying: 314/1024 [MB] (33 MBps) [2024-12-09T05:34:19.661Z] Copying: 346/1024 [MB] (32 MBps) [2024-12-09T05:34:20.597Z] Copying: 379/1024 [MB] (33 MBps) [2024-12-09T05:34:21.532Z] Copying: 417/1024 [MB] (37 MBps) [2024-12-09T05:34:22.467Z] Copying: 454/1024 [MB] (37 MBps) [2024-12-09T05:34:23.842Z] Copying: 491/1024 [MB] (37 MBps) [2024-12-09T05:34:24.775Z] Copying: 528/1024 [MB] (36 MBps) [2024-12-09T05:34:25.732Z] Copying: 564/1024 [MB] (36 MBps) [2024-12-09T05:34:26.669Z] Copying: 596/1024 [MB] (32 MBps) [2024-12-09T05:34:27.606Z] Copying: 628/1024 [MB] (31 MBps) [2024-12-09T05:34:28.544Z] Copying: 661/1024 [MB] (32 MBps) [2024-12-09T05:34:29.482Z] Copying: 694/1024 [MB] (33 MBps) [2024-12-09T05:34:30.861Z] Copying: 727/1024 [MB] (33 MBps) [2024-12-09T05:34:31.800Z] Copying: 759/1024 [MB] (31 MBps) [2024-12-09T05:34:32.737Z] Copying: 794/1024 [MB] (34 MBps) [2024-12-09T05:34:33.674Z] Copying: 827/1024 [MB] (33 MBps) [2024-12-09T05:34:34.609Z] Copying: 861/1024 [MB] (33 MBps) [2024-12-09T05:34:35.548Z] Copying: 893/1024 [MB] (32 MBps) [2024-12-09T05:34:36.480Z] Copying: 931/1024 [MB] (38 MBps) [2024-12-09T05:34:37.857Z] Copying: 972/1024 [MB] (40 MBps) [2024-12-09T05:34:38.116Z] Copying: 1005/1024 [MB] (33 MBps) [2024-12-09T05:34:38.375Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-12-09 05:34:38.270433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.270540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:39:55.919 [2024-12-09 05:34:38.270565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:39:55.919 [2024-12-09 05:34:38.270580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.270617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:39:55.919 [2024-12-09 05:34:38.275761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.275933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:39:55.919 [2024-12-09 05:34:38.275960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.126 ms 01:39:55.919 [2024-12-09 05:34:38.275972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.276259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.276276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:39:55.919 [2024-12-09 05:34:38.276289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 01:39:55.919 [2024-12-09 05:34:38.276300] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.289810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.289862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:39:55.919 [2024-12-09 05:34:38.289878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.512 ms 01:39:55.919 [2024-12-09 05:34:38.289891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.295341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.295515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:39:55.919 [2024-12-09 05:34:38.295546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.420 ms 01:39:55.919 [2024-12-09 05:34:38.295558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.334074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.334121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:39:55.919 [2024-12-09 05:34:38.334138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.422 ms 01:39:55.919 [2024-12-09 05:34:38.334149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.355004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.355054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:39:55.919 [2024-12-09 05:34:38.355069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.846 ms 01:39:55.919 [2024-12-09 05:34:38.355097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:55.919 [2024-12-09 05:34:38.357532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:55.919 [2024-12-09 05:34:38.357679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:39:55.919 [2024-12-09 05:34:38.357709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 01:39:55.919 [2024-12-09 05:34:38.357721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.179 [2024-12-09 05:34:38.394365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.179 [2024-12-09 05:34:38.394413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:39:56.179 [2024-12-09 05:34:38.394429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.678 ms 01:39:56.179 [2024-12-09 05:34:38.394439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.179 [2024-12-09 05:34:38.428812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.179 [2024-12-09 05:34:38.428850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:39:56.179 [2024-12-09 05:34:38.428864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.374 ms 01:39:56.179 [2024-12-09 05:34:38.428875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.179 [2024-12-09 05:34:38.463167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.179 [2024-12-09 05:34:38.463335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:39:56.179 [2024-12-09 05:34:38.463356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 34.309 ms 01:39:56.179 [2024-12-09 05:34:38.463367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.179 [2024-12-09 05:34:38.497023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.179 [2024-12-09 05:34:38.497192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:39:56.179 [2024-12-09 05:34:38.497212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.622 ms 01:39:56.179 [2024-12-09 05:34:38.497223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.179 [2024-12-09 05:34:38.497260] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:39:56.179 [2024-12-09 05:34:38.497279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:39:56.179 [2024-12-09 05:34:38.497294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 01:39:56.179 [2024-12-09 05:34:38.497306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 01:39:56.179 [2024-12-09 05:34:38.497520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:39:56.179 [2024-12-09 05:34:38.497853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.497988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498349] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:39:56.180 [2024-12-09 05:34:38.498423] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:39:56.180 [2024-12-09 05:34:38.498435] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5ef8fa2-4aaa-43c5-83fd-e3213962e517 01:39:56.180 [2024-12-09 05:34:38.498446] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 01:39:56.180 [2024-12-09 05:34:38.498702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 162496 01:39:56.180 [2024-12-09 05:34:38.498747] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160512 01:39:56.180 [2024-12-09 05:34:38.498780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 01:39:56.180 [2024-12-09 05:34:38.498810] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:39:56.180 [2024-12-09 05:34:38.498854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:39:56.180 [2024-12-09 05:34:38.498884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:39:56.180 [2024-12-09 05:34:38.498913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:39:56.180 [2024-12-09 05:34:38.498942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:39:56.180 [2024-12-09 05:34:38.498973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.180 [2024-12-09 05:34:38.499012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:39:56.180 [2024-12-09 05:34:38.499162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 01:39:56.180 [2024-12-09 05:34:38.499193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.180 [2024-12-09 05:34:38.519308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.180 [2024-12-09 05:34:38.519431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:39:56.180 [2024-12-09 05:34:38.519545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.084 ms 01:39:56.180 [2024-12-09 05:34:38.519582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.180 [2024-12-09 05:34:38.520226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:39:56.180 [2024-12-09 05:34:38.520325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:39:56.180 [2024-12-09 05:34:38.520394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 01:39:56.180 [2024-12-09 05:34:38.520436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.180 [2024-12-09 05:34:38.573414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
01:39:56.180 [2024-12-09 05:34:38.573580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:39:56.180 [2024-12-09 05:34:38.573717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.180 [2024-12-09 05:34:38.573756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.180 [2024-12-09 05:34:38.573841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.180 [2024-12-09 05:34:38.573906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:39:56.180 [2024-12-09 05:34:38.573923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.180 [2024-12-09 05:34:38.573941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.180 [2024-12-09 05:34:38.574020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.180 [2024-12-09 05:34:38.574034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:39:56.180 [2024-12-09 05:34:38.574046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.180 [2024-12-09 05:34:38.574057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.180 [2024-12-09 05:34:38.574077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.180 [2024-12-09 05:34:38.574088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:39:56.180 [2024-12-09 05:34:38.574100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.180 [2024-12-09 05:34:38.574110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.439 [2024-12-09 05:34:38.703020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.439 [2024-12-09 05:34:38.703089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:39:56.440 [2024-12-09 05:34:38.703105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.703133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.805329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.805387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:39:56.440 [2024-12-09 05:34:38.805403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.805413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.805592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.805606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:39:56.440 [2024-12-09 05:34:38.805619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.805629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.805696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.805708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:39:56.440 [2024-12-09 05:34:38.805719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.805730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 
05:34:38.805852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.805871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:39:56.440 [2024-12-09 05:34:38.805882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.805894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.805944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.805957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:39:56.440 [2024-12-09 05:34:38.805968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.805978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.806027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.806052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:39:56.440 [2024-12-09 05:34:38.806064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.806074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.806127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:39:56.440 [2024-12-09 05:34:38.806139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:39:56.440 [2024-12-09 05:34:38.806150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:39:56.440 [2024-12-09 05:34:38.806160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:39:56.440 [2024-12-09 05:34:38.806346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.713 ms, result 0 01:39:57.818 01:39:57.818 01:39:57.818 05:34:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:39:59.337 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:39:59.337 05:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:39:59.596 [2024-12-09 05:34:41.873653] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:39:59.596 [2024-12-09 05:34:41.873795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83292 ] 01:39:59.856 [2024-12-09 05:34:42.061191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:39:59.856 [2024-12-09 05:34:42.191419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:40:00.427 [2024-12-09 05:34:42.593670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:40:00.427 [2024-12-09 05:34:42.593761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:40:00.427 [2024-12-09 05:34:42.758992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.427 [2024-12-09 05:34:42.759247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:40:00.427 [2024-12-09 05:34:42.759276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:40:00.427 [2024-12-09 05:34:42.759289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.427 [2024-12-09 05:34:42.759359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.427 [2024-12-09 05:34:42.759377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:40:00.427 [2024-12-09 05:34:42.759389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:40:00.427 [2024-12-09 05:34:42.759400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.427 [2024-12-09 05:34:42.759425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:40:00.427 [2024-12-09 05:34:42.760387] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:40:00.427 [2024-12-09 05:34:42.760411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.427 [2024-12-09 05:34:42.760422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:40:00.427 [2024-12-09 05:34:42.760434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 01:40:00.427 [2024-12-09 05:34:42.760444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.427 [2024-12-09 05:34:42.762791] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:40:00.427 [2024-12-09 05:34:42.782566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.427 [2024-12-09 05:34:42.782627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:40:00.428 [2024-12-09 05:34:42.782643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.808 ms 01:40:00.428 [2024-12-09 05:34:42.782655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.782725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.782741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:40:00.428 [2024-12-09 05:34:42.782752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 01:40:00.428 [2024-12-09 05:34:42.782764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.794974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:40:00.428 [2024-12-09 05:34:42.795008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:40:00.428 [2024-12-09 05:34:42.795027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 01:40:00.428 [2024-12-09 05:34:42.795044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.795165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.795181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:40:00.428 [2024-12-09 05:34:42.795193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 01:40:00.428 [2024-12-09 05:34:42.795204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.795263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.795276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:40:00.428 [2024-12-09 05:34:42.795288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:40:00.428 [2024-12-09 05:34:42.795298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.795329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:40:00.428 [2024-12-09 05:34:42.801298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.801333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:40:00.428 [2024-12-09 05:34:42.801351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.985 ms 01:40:00.428 [2024-12-09 05:34:42.801362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.801394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.801416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:40:00.428 [2024-12-09 05:34:42.801428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:40:00.428 [2024-12-09 05:34:42.801439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.801492] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:40:00.428 [2024-12-09 05:34:42.801521] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:40:00.428 [2024-12-09 05:34:42.801559] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:40:00.428 [2024-12-09 05:34:42.801583] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:40:00.428 [2024-12-09 05:34:42.801675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:40:00.428 [2024-12-09 05:34:42.801690] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:40:00.428 [2024-12-09 05:34:42.801704] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:40:00.428 [2024-12-09 05:34:42.801719] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:40:00.428 [2024-12-09 05:34:42.801732] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:40:00.428 [2024-12-09 05:34:42.801744] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:40:00.428 [2024-12-09 05:34:42.801755] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:40:00.428 [2024-12-09 05:34:42.801771] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:40:00.428 [2024-12-09 05:34:42.801781] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:40:00.428 [2024-12-09 05:34:42.801793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.801803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:40:00.428 [2024-12-09 05:34:42.801815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 01:40:00.428 [2024-12-09 05:34:42.801825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.801903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.428 [2024-12-09 05:34:42.801914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:40:00.428 [2024-12-09 05:34:42.801925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 01:40:00.428 [2024-12-09 05:34:42.801936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.428 [2024-12-09 05:34:42.802038] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:40:00.428 [2024-12-09 05:34:42.802053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:40:00.428 [2024-12-09 05:34:42.802064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:40:00.428 [2024-12-09 05:34:42.802096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:40:00.428 [2024-12-09 05:34:42.802127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:40:00.428 [2024-12-09 05:34:42.802148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:40:00.428 [2024-12-09 05:34:42.802159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:40:00.428 [2024-12-09 05:34:42.802169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:40:00.428 [2024-12-09 05:34:42.802190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:40:00.428 [2024-12-09 05:34:42.802201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:40:00.428 [2024-12-09 05:34:42.802211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:40:00.428 [2024-12-09 05:34:42.802230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802239] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:40:00.428 [2024-12-09 05:34:42.802260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:40:00.428 [2024-12-09 05:34:42.802289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:40:00.428 [2024-12-09 05:34:42.802318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:40:00.428 [2024-12-09 05:34:42.802346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:40:00.428 [2024-12-09 05:34:42.802374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:40:00.428 [2024-12-09 05:34:42.802392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:40:00.428 [2024-12-09 05:34:42.802401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:40:00.428 [2024-12-09 05:34:42.802410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:40:00.428 [2024-12-09 05:34:42.802419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:40:00.428 [2024-12-09 05:34:42.802428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:40:00.428 [2024-12-09 05:34:42.802437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:40:00.428 [2024-12-09 05:34:42.802456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:40:00.428 [2024-12-09 05:34:42.802703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802740] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:40:00.428 [2024-12-09 05:34:42.802772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:40:00.428 [2024-12-09 05:34:42.802803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:40:00.428 [2024-12-09 05:34:42.802834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:00.428 [2024-12-09 05:34:42.802864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:40:00.428 [2024-12-09 05:34:42.802947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:40:00.428 [2024-12-09 05:34:42.802982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:40:00.428 
[2024-12-09 05:34:42.803021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:40:00.428 [2024-12-09 05:34:42.803054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:40:00.428 [2024-12-09 05:34:42.803084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:40:00.428 [2024-12-09 05:34:42.803116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:40:00.428 [2024-12-09 05:34:42.803167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:40:00.428 [2024-12-09 05:34:42.803308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:40:00.429 [2024-12-09 05:34:42.803357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:40:00.429 [2024-12-09 05:34:42.803405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:40:00.429 [2024-12-09 05:34:42.803453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:40:00.429 [2024-12-09 05:34:42.803520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:40:00.429 [2024-12-09 05:34:42.803628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:40:00.429 [2024-12-09 05:34:42.803679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:40:00.429 [2024-12-09 05:34:42.803725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:40:00.429 [2024-12-09 05:34:42.803772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:40:00.429 [2024-12-09 05:34:42.803866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:40:00.429 [2024-12-09 05:34:42.803915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:40:00.429 [2024-12-09 05:34:42.803962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:40:00.429 [2024-12-09 05:34:42.804009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:40:00.429 [2024-12-09 05:34:42.804098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:40:00.429 [2024-12-09 05:34:42.804148] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:40:00.429 [2024-12-09 05:34:42.804182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:40:00.429 [2024-12-09 05:34:42.804195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:40:00.429 [2024-12-09 05:34:42.804206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:40:00.429 [2024-12-09 05:34:42.804217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:40:00.429 [2024-12-09 05:34:42.804228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:40:00.429 [2024-12-09 05:34:42.804242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.429 [2024-12-09 05:34:42.804254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:40:00.429 [2024-12-09 05:34:42.804267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 01:40:00.429 [2024-12-09 05:34:42.804277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.429 [2024-12-09 05:34:42.853438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.429 [2024-12-09 05:34:42.853488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:40:00.429 [2024-12-09 05:34:42.853504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.175 ms 01:40:00.429 [2024-12-09 05:34:42.853521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.429 [2024-12-09 05:34:42.853597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.429 [2024-12-09 05:34:42.853627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:40:00.429 [2024-12-09 05:34:42.853639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 01:40:00.429 [2024-12-09 05:34:42.853650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.916277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.916318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:40:00.688 [2024-12-09 05:34:42.916333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.627 ms 01:40:00.688 [2024-12-09 05:34:42.916359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.916397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.916414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:40:00.688 [2024-12-09 05:34:42.916426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:40:00.688 [2024-12-09 05:34:42.916436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.917266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.917287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:40:00.688 [2024-12-09 05:34:42.917300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 01:40:00.688 [2024-12-09 05:34:42.917311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.917444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.917460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:40:00.688 [2024-12-09 05:34:42.917489] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 01:40:00.688 [2024-12-09 05:34:42.917500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.940840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.941011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:40:00.688 [2024-12-09 05:34:42.941138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.355 ms 01:40:00.688 [2024-12-09 05:34:42.941177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.961682] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:40:00.688 [2024-12-09 05:34:42.961867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:40:00.688 [2024-12-09 05:34:42.961964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.961999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:40:00.688 [2024-12-09 05:34:42.962032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.681 ms 01:40:00.688 [2024-12-09 05:34:42.962062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:42.991818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:42.991960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:40:00.688 [2024-12-09 05:34:42.992078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.742 ms 01:40:00.688 [2024-12-09 05:34:42.992118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:43.010795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:43.010949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:40:00.688 [2024-12-09 05:34:43.011054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.383 ms 01:40:00.688 [2024-12-09 05:34:43.011091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:43.028955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:43.029116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:40:00.688 [2024-12-09 05:34:43.029137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.834 ms 01:40:00.688 [2024-12-09 05:34:43.029148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:43.030018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:43.030051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:40:00.688 [2024-12-09 05:34:43.030068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 01:40:00.688 [2024-12-09 05:34:43.030079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.688 [2024-12-09 05:34:43.123515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.688 [2024-12-09 05:34:43.123590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:40:00.688 [2024-12-09 05:34:43.123617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.561 ms 01:40:00.689 [2024-12-09 05:34:43.123629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.689 [2024-12-09 05:34:43.134952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:40:00.689 [2024-12-09 05:34:43.139127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.689 [2024-12-09 05:34:43.139157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:40:00.689 [2024-12-09 05:34:43.139172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.469 ms 01:40:00.689 [2024-12-09 05:34:43.139199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.689 [2024-12-09 05:34:43.139328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.689 [2024-12-09 05:34:43.139343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:40:00.689 [2024-12-09 05:34:43.139360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:40:00.689 [2024-12-09 05:34:43.139372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.689 [2024-12-09 05:34:43.140801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.689 [2024-12-09 05:34:43.140830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:40:00.689 [2024-12-09 05:34:43.140843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 01:40:00.689 [2024-12-09 05:34:43.140853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.689 [2024-12-09 05:34:43.140879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.689 [2024-12-09 05:34:43.140892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:40:00.689 [2024-12-09 05:34:43.140902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:40:00.689 [2024-12-09 05:34:43.140913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.689 [2024-12-09 05:34:43.140963] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:40:00.689 [2024-12-09 05:34:43.140977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.689 [2024-12-09 05:34:43.140988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:40:00.689 [2024-12-09 05:34:43.140999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:40:00.689 [2024-12-09 05:34:43.141010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.947 [2024-12-09 05:34:43.177769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.948 [2024-12-09 05:34:43.177807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:40:00.948 [2024-12-09 05:34:43.177844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.796 ms 01:40:00.948 [2024-12-09 05:34:43.177855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:00.948 [2024-12-09 05:34:43.177938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:00.948 [2024-12-09 05:34:43.177952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:40:00.948 [2024-12-09 05:34:43.177964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:40:00.948 [2024-12-09 05:34:43.177975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
01:40:00.948 [2024-12-09 05:34:43.179535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.652 ms, result 0 01:40:02.347  [2024-12-09T05:34:45.737Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-09T05:34:46.673Z] Copying: 50/1024 [MB] (24 MBps) [2024-12-09T05:34:47.608Z] Copying: 73/1024 [MB] (23 MBps) [2024-12-09T05:34:48.544Z] Copying: 98/1024 [MB] (24 MBps) [2024-12-09T05:34:49.481Z] Copying: 121/1024 [MB] (23 MBps) [2024-12-09T05:34:50.420Z] Copying: 145/1024 [MB] (23 MBps) [2024-12-09T05:34:51.800Z] Copying: 170/1024 [MB] (24 MBps) [2024-12-09T05:34:52.738Z] Copying: 195/1024 [MB] (25 MBps) [2024-12-09T05:34:53.676Z] Copying: 220/1024 [MB] (25 MBps) [2024-12-09T05:34:54.615Z] Copying: 246/1024 [MB] (25 MBps) [2024-12-09T05:34:55.552Z] Copying: 271/1024 [MB] (25 MBps) [2024-12-09T05:34:56.489Z] Copying: 296/1024 [MB] (25 MBps) [2024-12-09T05:34:57.425Z] Copying: 321/1024 [MB] (24 MBps) [2024-12-09T05:34:58.799Z] Copying: 346/1024 [MB] (24 MBps) [2024-12-09T05:34:59.734Z] Copying: 371/1024 [MB] (24 MBps) [2024-12-09T05:35:00.670Z] Copying: 395/1024 [MB] (24 MBps) [2024-12-09T05:35:01.606Z] Copying: 424/1024 [MB] (28 MBps) [2024-12-09T05:35:02.542Z] Copying: 452/1024 [MB] (27 MBps) [2024-12-09T05:35:03.478Z] Copying: 479/1024 [MB] (27 MBps) [2024-12-09T05:35:04.415Z] Copying: 507/1024 [MB] (28 MBps) [2024-12-09T05:35:05.796Z] Copying: 535/1024 [MB] (27 MBps) [2024-12-09T05:35:06.364Z] Copying: 561/1024 [MB] (25 MBps) [2024-12-09T05:35:07.736Z] Copying: 586/1024 [MB] (24 MBps) [2024-12-09T05:35:08.669Z] Copying: 611/1024 [MB] (25 MBps) [2024-12-09T05:35:09.601Z] Copying: 637/1024 [MB] (25 MBps) [2024-12-09T05:35:10.537Z] Copying: 664/1024 [MB] (26 MBps) [2024-12-09T05:35:11.474Z] Copying: 691/1024 [MB] (26 MBps) [2024-12-09T05:35:12.451Z] Copying: 718/1024 [MB] (27 MBps) [2024-12-09T05:35:13.388Z] Copying: 745/1024 [MB] (27 MBps) [2024-12-09T05:35:14.766Z] Copying: 772/1024 [MB] (26 MBps) [2024-12-09T05:35:15.696Z] Copying: 798/1024 [MB] (26 MBps) [2024-12-09T05:35:16.630Z] Copying: 826/1024 [MB] (27 MBps) [2024-12-09T05:35:17.567Z] Copying: 854/1024 [MB] (28 MBps) [2024-12-09T05:35:18.501Z] Copying: 882/1024 [MB] (27 MBps) [2024-12-09T05:35:19.435Z] Copying: 907/1024 [MB] (25 MBps) [2024-12-09T05:35:20.368Z] Copying: 932/1024 [MB] (24 MBps) [2024-12-09T05:35:21.746Z] Copying: 956/1024 [MB] (24 MBps) [2024-12-09T05:35:22.684Z] Copying: 981/1024 [MB] (24 MBps) [2024-12-09T05:35:23.253Z] Copying: 1005/1024 [MB] (23 MBps) [2024-12-09T05:35:23.253Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-09 05:35:23.142003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.142112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:40:40.797 [2024-12-09 05:35:23.142149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:40:40.797 [2024-12-09 05:35:23.142172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.142218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:40:40.797 [2024-12-09 05:35:23.152633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.152733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:40:40.797 [2024-12-09 05:35:23.152770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.392 ms 01:40:40.797 
[2024-12-09 05:35:23.152799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.153316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.153350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:40:40.797 [2024-12-09 05:35:23.153379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 01:40:40.797 [2024-12-09 05:35:23.153407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.158200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.158263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:40:40.797 [2024-12-09 05:35:23.158288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 01:40:40.797 [2024-12-09 05:35:23.158320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.165309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.165523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:40:40.797 [2024-12-09 05:35:23.165552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.955 ms 01:40:40.797 [2024-12-09 05:35:23.165574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.204581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.204628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:40:40.797 [2024-12-09 05:35:23.204646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.943 ms 01:40:40.797 [2024-12-09 05:35:23.204675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.226568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.226618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:40:40.797 [2024-12-09 05:35:23.226635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.881 ms 01:40:40.797 [2024-12-09 05:35:23.226649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:40.797 [2024-12-09 05:35:23.228888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:40.797 [2024-12-09 05:35:23.228936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:40:40.797 [2024-12-09 05:35:23.228953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 01:40:40.797 [2024-12-09 05:35:23.228966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.058 [2024-12-09 05:35:23.265256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:41.058 [2024-12-09 05:35:23.265302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:40:41.058 [2024-12-09 05:35:23.265318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.326 ms 01:40:41.058 [2024-12-09 05:35:23.265346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.058 [2024-12-09 05:35:23.300578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:41.058 [2024-12-09 05:35:23.300623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:40:41.058 [2024-12-09 05:35:23.300641] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.233 ms 01:40:41.058 [2024-12-09 05:35:23.300673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.058 [2024-12-09 05:35:23.336090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:41.058 [2024-12-09 05:35:23.336266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:40:41.058 [2024-12-09 05:35:23.336311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.415 ms 01:40:41.058 [2024-12-09 05:35:23.336324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.058 [2024-12-09 05:35:23.371699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:41.058 [2024-12-09 05:35:23.371742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:40:41.058 [2024-12-09 05:35:23.371759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.235 ms 01:40:41.058 [2024-12-09 05:35:23.371787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.058 [2024-12-09 05:35:23.371829] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:40:41.058 [2024-12-09 05:35:23.371859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:40:41.058 [2024-12-09 05:35:23.371881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 01:40:41.058 [2024-12-09 05:35:23.371895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.371999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372074] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372404] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:40:41.058 [2024-12-09 05:35:23.372454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 
05:35:23.372788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.372996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
01:40:41.059 [2024-12-09 05:35:23.373279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:40:41.059 [2024-12-09 05:35:23.373442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:40:41.059 [2024-12-09 05:35:23.373458] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5ef8fa2-4aaa-43c5-83fd-e3213962e517 01:40:41.059 [2024-12-09 05:35:23.373473] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 01:40:41.059 [2024-12-09 05:35:23.373503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:40:41.059 [2024-12-09 05:35:23.373518] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:40:41.059 [2024-12-09 05:35:23.373534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:40:41.059 [2024-12-09 05:35:23.373566] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:40:41.059 [2024-12-09 05:35:23.373582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:40:41.059 [2024-12-09 05:35:23.373596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:40:41.059 [2024-12-09 05:35:23.373610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:40:41.059 [2024-12-09 05:35:23.373626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:40:41.059 [2024-12-09 05:35:23.373642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:41.059 [2024-12-09 05:35:23.373658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:40:41.059 [2024-12-09 05:35:23.373675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 01:40:41.059 [2024-12-09 05:35:23.373696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.059 [2024-12-09 05:35:23.394673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:41.059 [2024-12-09 05:35:23.394713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:40:41.059 [2024-12-09 05:35:23.394729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.958 ms 01:40:41.059 [2024-12-09 05:35:23.394758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.059 [2024-12-09 05:35:23.395342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:40:41.059 [2024-12-09 05:35:23.395369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:40:41.059 [2024-12-09 05:35:23.395383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 01:40:41.059 [2024-12-09 05:35:23.395396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.059 [2024-12-09 05:35:23.447754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.059 [2024-12-09 05:35:23.447799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:40:41.059 [2024-12-09 05:35:23.447816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.059 [2024-12-09 05:35:23.447829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.059 [2024-12-09 05:35:23.447898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.059 [2024-12-09 05:35:23.447920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:40:41.059 [2024-12-09 05:35:23.447933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.059 [2024-12-09 05:35:23.447946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.059 [2024-12-09 05:35:23.448024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.059 [2024-12-09 05:35:23.448040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:40:41.059 [2024-12-09 05:35:23.448054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.059 [2024-12-09 05:35:23.448067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.059 [2024-12-09 05:35:23.448088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.059 [2024-12-09 05:35:23.448101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:40:41.059 [2024-12-09 05:35:23.448120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.059 [2024-12-09 05:35:23.448132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.572906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.572971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:40:41.319 [2024-12-09 05:35:23.573007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.573022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.675009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.675287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:40:41.319 [2024-12-09 05:35:23.675500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.675640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.675954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.675993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:40:41.319 [2024-12-09 05:35:23.676013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.676029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 
[2024-12-09 05:35:23.676103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.676121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:40:41.319 [2024-12-09 05:35:23.676137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.676160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.676314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.676334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:40:41.319 [2024-12-09 05:35:23.676351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.676368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.676421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.676438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:40:41.319 [2024-12-09 05:35:23.676455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.676494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.676561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.676578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:40:41.319 [2024-12-09 05:35:23.676595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.676610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.676674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:41.319 [2024-12-09 05:35:23.676692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:40:41.319 [2024-12-09 05:35:23.676708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:41.319 [2024-12-09 05:35:23.676728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:41.319 [2024-12-09 05:35:23.676902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.728 ms, result 0 01:40:42.698 01:40:42.698 01:40:42.698 05:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 01:40:44.599 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 01:40:44.599 Process with pid 81468 
is not found 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81468 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81468 ']' 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81468 01:40:44.599 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81468) - No such process 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81468 is not found' 01:40:44.599 05:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 01:40:44.864 Remove shared memory files 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 01:40:44.865 ************************************ 01:40:44.865 END TEST ftl_dirty_shutdown 01:40:44.865 ************************************ 01:40:44.865 01:40:44.865 real 3m41.799s 01:40:44.865 user 4m10.940s 01:40:44.865 sys 0m41.073s 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 01:40:44.865 05:35:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 01:40:44.865 05:35:27 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 01:40:44.865 05:35:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:40:44.865 05:35:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:40:44.865 05:35:27 ftl -- common/autotest_common.sh@10 -- # set +x 01:40:45.149 ************************************ 01:40:45.149 START TEST ftl_upgrade_shutdown 01:40:45.149 ************************************ 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 01:40:45.149 * Looking for test storage... 
01:40:45.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:40:45.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:45.149 --rc genhtml_branch_coverage=1 01:40:45.149 --rc genhtml_function_coverage=1 01:40:45.149 --rc genhtml_legend=1 01:40:45.149 --rc geninfo_all_blocks=1 01:40:45.149 --rc geninfo_unexecuted_blocks=1 01:40:45.149 01:40:45.149 ' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:40:45.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:45.149 --rc genhtml_branch_coverage=1 01:40:45.149 --rc genhtml_function_coverage=1 01:40:45.149 --rc genhtml_legend=1 01:40:45.149 --rc geninfo_all_blocks=1 01:40:45.149 --rc geninfo_unexecuted_blocks=1 01:40:45.149 01:40:45.149 ' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:40:45.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:45.149 --rc genhtml_branch_coverage=1 01:40:45.149 --rc genhtml_function_coverage=1 01:40:45.149 --rc genhtml_legend=1 01:40:45.149 --rc geninfo_all_blocks=1 01:40:45.149 --rc geninfo_unexecuted_blocks=1 01:40:45.149 01:40:45.149 ' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:40:45.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:45.149 --rc genhtml_branch_coverage=1 01:40:45.149 --rc genhtml_function_coverage=1 01:40:45.149 --rc genhtml_legend=1 01:40:45.149 --rc geninfo_all_blocks=1 01:40:45.149 --rc geninfo_unexecuted_blocks=1 01:40:45.149 01:40:45.149 ' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:40:45.149 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 01:40:45.150 05:35:27 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:40:45.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83811 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83811 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83811 ']' 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:40:45.150 05:35:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:40:45.426 [2024-12-09 05:35:27.714726] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:40:45.426 [2024-12-09 05:35:27.715081] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83811 ] 01:40:45.685 [2024-12-09 05:35:27.904427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:40:45.685 [2024-12-09 05:35:28.027196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 01:40:46.620 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 01:40:46.879 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:40:47.445 { 01:40:47.445 "name": "basen1", 01:40:47.445 "aliases": [ 01:40:47.445 "5806b8df-e16d-431a-a7a5-d0e8f9d9286b" 01:40:47.445 ], 01:40:47.445 "product_name": "NVMe disk", 01:40:47.445 "block_size": 4096, 01:40:47.445 "num_blocks": 1310720, 01:40:47.445 "uuid": "5806b8df-e16d-431a-a7a5-d0e8f9d9286b", 01:40:47.445 "numa_id": -1, 01:40:47.445 "assigned_rate_limits": { 01:40:47.445 "rw_ios_per_sec": 0, 01:40:47.445 "rw_mbytes_per_sec": 0, 01:40:47.445 "r_mbytes_per_sec": 0, 01:40:47.445 "w_mbytes_per_sec": 0 01:40:47.445 }, 01:40:47.445 "claimed": true, 01:40:47.445 "claim_type": "read_many_write_one", 01:40:47.445 "zoned": false, 01:40:47.445 "supported_io_types": { 01:40:47.445 "read": true, 01:40:47.445 "write": true, 01:40:47.445 "unmap": true, 01:40:47.445 "flush": true, 01:40:47.445 "reset": true, 01:40:47.445 "nvme_admin": true, 01:40:47.445 "nvme_io": true, 01:40:47.445 "nvme_io_md": false, 01:40:47.445 "write_zeroes": true, 01:40:47.445 "zcopy": false, 01:40:47.445 "get_zone_info": false, 01:40:47.445 "zone_management": false, 01:40:47.445 "zone_append": false, 01:40:47.445 "compare": true, 01:40:47.445 "compare_and_write": false, 01:40:47.445 "abort": true, 01:40:47.445 "seek_hole": false, 01:40:47.445 "seek_data": false, 01:40:47.445 "copy": true, 01:40:47.445 "nvme_iov_md": false 01:40:47.445 }, 01:40:47.445 "driver_specific": { 01:40:47.445 "nvme": [ 01:40:47.445 { 01:40:47.445 "pci_address": "0000:00:11.0", 01:40:47.445 "trid": { 01:40:47.445 "trtype": "PCIe", 01:40:47.445 "traddr": "0000:00:11.0" 01:40:47.445 }, 01:40:47.445 "ctrlr_data": { 01:40:47.445 "cntlid": 0, 01:40:47.445 "vendor_id": "0x1b36", 01:40:47.445 "model_number": "QEMU NVMe Ctrl", 01:40:47.445 "serial_number": "12341", 01:40:47.445 "firmware_revision": "8.0.0", 01:40:47.445 "subnqn": "nqn.2019-08.org.qemu:12341", 01:40:47.445 "oacs": { 01:40:47.445 "security": 0, 01:40:47.445 "format": 1, 01:40:47.445 "firmware": 0, 01:40:47.445 "ns_manage": 1 01:40:47.445 }, 01:40:47.445 "multi_ctrlr": false, 01:40:47.445 "ana_reporting": false 01:40:47.445 }, 01:40:47.445 "vs": { 01:40:47.445 "nvme_version": "1.4" 01:40:47.445 }, 01:40:47.445 "ns_data": { 01:40:47.445 "id": 1, 01:40:47.445 "can_share": false 01:40:47.445 } 01:40:47.445 } 01:40:47.445 ], 01:40:47.445 "mp_policy": "active_passive" 01:40:47.445 } 01:40:47.445 } 01:40:47.445 ]' 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f40c1519-414a-424c-90a5-f1c7635a2e2f 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 01:40:47.445 05:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f40c1519-414a-424c-90a5-f1c7635a2e2f 01:40:47.702 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 01:40:47.960 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6a0f8776-91fe-4d94-abcd-7c1d440af63c 01:40:47.960 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6a0f8776-91fe-4d94-abcd-7c1d440af63c 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b ]] 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b 5120 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:40:48.220 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b 01:40:48.477 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:40:48.477 { 01:40:48.477 "name": "5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b", 01:40:48.477 "aliases": [ 01:40:48.477 "lvs/basen1p0" 01:40:48.477 ], 01:40:48.477 "product_name": "Logical Volume", 01:40:48.477 "block_size": 4096, 01:40:48.477 "num_blocks": 5242880, 01:40:48.477 "uuid": "5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b", 01:40:48.477 "assigned_rate_limits": { 01:40:48.477 "rw_ios_per_sec": 0, 01:40:48.477 "rw_mbytes_per_sec": 0, 01:40:48.477 "r_mbytes_per_sec": 0, 01:40:48.477 "w_mbytes_per_sec": 0 01:40:48.477 }, 01:40:48.477 "claimed": false, 01:40:48.477 "zoned": false, 01:40:48.477 "supported_io_types": { 01:40:48.477 "read": true, 01:40:48.477 "write": true, 01:40:48.477 "unmap": true, 01:40:48.477 "flush": false, 01:40:48.477 "reset": true, 01:40:48.477 "nvme_admin": false, 01:40:48.477 "nvme_io": false, 01:40:48.477 "nvme_io_md": false, 01:40:48.477 "write_zeroes": 
true, 01:40:48.477 "zcopy": false, 01:40:48.477 "get_zone_info": false, 01:40:48.477 "zone_management": false, 01:40:48.477 "zone_append": false, 01:40:48.477 "compare": false, 01:40:48.478 "compare_and_write": false, 01:40:48.478 "abort": false, 01:40:48.478 "seek_hole": true, 01:40:48.478 "seek_data": true, 01:40:48.478 "copy": false, 01:40:48.478 "nvme_iov_md": false 01:40:48.478 }, 01:40:48.478 "driver_specific": { 01:40:48.478 "lvol": { 01:40:48.478 "lvol_store_uuid": "6a0f8776-91fe-4d94-abcd-7c1d440af63c", 01:40:48.478 "base_bdev": "basen1", 01:40:48.478 "thin_provision": true, 01:40:48.478 "num_allocated_clusters": 0, 01:40:48.478 "snapshot": false, 01:40:48.478 "clone": false, 01:40:48.478 "esnap_clone": false 01:40:48.478 } 01:40:48.478 } 01:40:48.478 } 01:40:48.478 ]' 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 01:40:48.478 05:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 01:40:48.735 05:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 01:40:48.736 05:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 01:40:48.736 05:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 01:40:48.993 05:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 01:40:48.993 05:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 01:40:48.993 05:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b -c cachen1p0 --l2p_dram_limit 2 01:40:49.254 [2024-12-09 05:35:31.461001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.461078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:40:49.254 [2024-12-09 05:35:31.461105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 01:40:49.254 [2024-12-09 05:35:31.461117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.461213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.461227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:40:49.254 [2024-12-09 05:35:31.461243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 01:40:49.254 [2024-12-09 05:35:31.461257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.461286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:40:49.254 [2024-12-09 
05:35:31.462402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:40:49.254 [2024-12-09 05:35:31.462449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.462481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:40:49.254 [2024-12-09 05:35:31.462499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.165 ms 01:40:49.254 [2024-12-09 05:35:31.462513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.462620] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 162f066f-41b9-45f4-beb4-c6fa7d398deb 01:40:49.254 [2024-12-09 05:35:31.465225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.465423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 01:40:49.254 [2024-12-09 05:35:31.465450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 01:40:49.254 [2024-12-09 05:35:31.465478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.479767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.480001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:40:49.254 [2024-12-09 05:35:31.480030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.177 ms 01:40:49.254 [2024-12-09 05:35:31.480047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.480117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.480137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:40:49.254 [2024-12-09 05:35:31.480152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 01:40:49.254 [2024-12-09 05:35:31.480172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.480245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.480263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:40:49.254 [2024-12-09 05:35:31.480285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 01:40:49.254 [2024-12-09 05:35:31.480301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.480335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:40:49.254 [2024-12-09 05:35:31.486581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.486751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:40:49.254 [2024-12-09 05:35:31.486801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.263 ms 01:40:49.254 [2024-12-09 05:35:31.486815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.486859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.486872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:40:49.254 [2024-12-09 05:35:31.486890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:40:49.254 [2024-12-09 05:35:31.486903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.486950] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 01:40:49.254 [2024-12-09 05:35:31.487101] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:40:49.254 [2024-12-09 05:35:31.487128] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:40:49.254 [2024-12-09 05:35:31.487144] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:40:49.254 [2024-12-09 05:35:31.487164] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:40:49.254 [2024-12-09 05:35:31.487179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 01:40:49.254 [2024-12-09 05:35:31.487197] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:40:49.254 [2024-12-09 05:35:31.487215] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:40:49.254 [2024-12-09 05:35:31.487231] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:40:49.254 [2024-12-09 05:35:31.487244] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:40:49.254 [2024-12-09 05:35:31.487262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.487275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:40:49.254 [2024-12-09 05:35:31.487292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.315 ms 01:40:49.254 [2024-12-09 05:35:31.487305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.487388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.254 [2024-12-09 05:35:31.487416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:40:49.254 [2024-12-09 05:35:31.487435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 01:40:49.254 [2024-12-09 05:35:31.487448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.254 [2024-12-09 05:35:31.487582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:40:49.254 [2024-12-09 05:35:31.487599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:40:49.254 [2024-12-09 05:35:31.487617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:40:49.254 [2024-12-09 05:35:31.487630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.487647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:40:49.255 [2024-12-09 05:35:31.487659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.487675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:40:49.255 [2024-12-09 05:35:31.487687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:40:49.255 [2024-12-09 05:35:31.487702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:40:49.255 [2024-12-09 05:35:31.487714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.487729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:40:49.255 [2024-12-09 05:35:31.487741] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 01:40:49.255 [2024-12-09 05:35:31.487758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.487770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:40:49.255 [2024-12-09 05:35:31.487787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 01:40:49.255 [2024-12-09 05:35:31.487799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.487818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:40:49.255 [2024-12-09 05:35:31.487830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:40:49.255 [2024-12-09 05:35:31.487847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.487860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:40:49.255 [2024-12-09 05:35:31.487876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:40:49.255 [2024-12-09 05:35:31.487887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:40:49.255 [2024-12-09 05:35:31.487903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:40:49.255 [2024-12-09 05:35:31.487915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:40:49.255 [2024-12-09 05:35:31.487930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:40:49.255 [2024-12-09 05:35:31.487941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:40:49.255 [2024-12-09 05:35:31.487956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:40:49.255 [2024-12-09 05:35:31.487967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:40:49.255 [2024-12-09 05:35:31.487982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:40:49.255 [2024-12-09 05:35:31.487994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:40:49.255 [2024-12-09 05:35:31.488009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:40:49.255 [2024-12-09 05:35:31.488020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:40:49.255 [2024-12-09 05:35:31.488038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:40:49.255 [2024-12-09 05:35:31.488049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.488064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:40:49.255 [2024-12-09 05:35:31.488075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:40:49.255 [2024-12-09 05:35:31.488090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.488101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:40:49.255 [2024-12-09 05:35:31.488116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:40:49.255 [2024-12-09 05:35:31.488128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.488142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:40:49.255 [2024-12-09 05:35:31.488154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:40:49.255 [2024-12-09 05:35:31.488168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.488179] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 01:40:49.255 [2024-12-09 05:35:31.488195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:40:49.255 [2024-12-09 05:35:31.488209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:40:49.255 [2024-12-09 05:35:31.488227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:40:49.255 [2024-12-09 05:35:31.488240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:40:49.255 [2024-12-09 05:35:31.488259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:40:49.255 [2024-12-09 05:35:31.488271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:40:49.255 [2024-12-09 05:35:31.488287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:40:49.255 [2024-12-09 05:35:31.488299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:40:49.255 [2024-12-09 05:35:31.488315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:40:49.255 [2024-12-09 05:35:31.488333] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:40:49.255 [2024-12-09 05:35:31.488356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:40:49.255 [2024-12-09 05:35:31.488387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:40:49.255 [2024-12-09 05:35:31.488427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:40:49.255 [2024-12-09 05:35:31.488444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:40:49.255 [2024-12-09 05:35:31.488457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:40:49.255 [2024-12-09 05:35:31.488486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:40:49.255 [2024-12-09 05:35:31.488592] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 01:40:49.255 [2024-12-09 05:35:31.488610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:40:49.255 [2024-12-09 05:35:31.488640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:40:49.255 [2024-12-09 05:35:31.488653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:40:49.255 [2024-12-09 05:35:31.488668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:40:49.255 [2024-12-09 05:35:31.488682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:49.255 [2024-12-09 05:35:31.488698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:40:49.255 [2024-12-09 05:35:31.488712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.164 ms 01:40:49.255 [2024-12-09 05:35:31.488729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:49.255 [2024-12-09 05:35:31.488781] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
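
Condensed for reference, the bdev stack that the startup trace above is bringing up was assembled with the following RPCs; this is a readback of this log, not the test script itself (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  rpc.py bdev_lvol_create_lvstore basen1 lvs                                          # -> lvs 6a0f8776-91fe-4d94-abcd-7c1d440af63c
  rpc.py bdev_lvol_create basen1p0 20480 -t -u 6a0f8776-91fe-4d94-abcd-7c1d440af63c   # thin-provisioned 20 GiB base lvol
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0                 # -> cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                                          # -> cachen1p0, the 5 GiB NV cache slice
  rpc.py -t 60 bdev_ftl_create -b ftl -d 5c1a1cf8-c24e-4c8b-9b9e-9f0404c78f9b -c cachen1p0 --l2p_dram_limit 2

The get_bdev_size check above works out as 4096 B/block x 5242880 blocks = 21474836480 B = 20480 MiB, matching the "Base device capacity: 20480.00 MiB" line in the layout dump.
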
01:40:49.255 [2024-12-09 05:35:31.488805] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 01:40:53.450 [2024-12-09 05:35:35.306901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.307271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 01:40:53.450 [2024-12-09 05:35:35.307308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3824.316 ms 01:40:53.450 [2024-12-09 05:35:35.307328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.356565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.356668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:40:53.450 [2024-12-09 05:35:35.356690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.905 ms 01:40:53.450 [2024-12-09 05:35:35.356710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.356829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.356849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:40:53.450 [2024-12-09 05:35:35.356865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 01:40:53.450 [2024-12-09 05:35:35.356891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.409838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.409904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:40:53.450 [2024-12-09 05:35:35.409924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.979 ms 01:40:53.450 [2024-12-09 05:35:35.409942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.410004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.410022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:40:53.450 [2024-12-09 05:35:35.410037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 01:40:53.450 [2024-12-09 05:35:35.410053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.410959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.410983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:40:53.450 [2024-12-09 05:35:35.411021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.798 ms 01:40:53.450 [2024-12-09 05:35:35.411038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.411088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.411111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:40:53.450 [2024-12-09 05:35:35.411124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 01:40:53.450 [2024-12-09 05:35:35.411145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.436778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.436839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:40:53.450 [2024-12-09 05:35:35.436874] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.646 ms 01:40:53.450 [2024-12-09 05:35:35.436894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.464617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:40:53.450 [2024-12-09 05:35:35.466421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.466454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:40:53.450 [2024-12-09 05:35:35.466490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.453 ms 01:40:53.450 [2024-12-09 05:35:35.466504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.503559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.503619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 01:40:53.450 [2024-12-09 05:35:35.503658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.061 ms 01:40:53.450 [2024-12-09 05:35:35.503672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.503772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.503786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:40:53.450 [2024-12-09 05:35:35.503808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 01:40:53.450 [2024-12-09 05:35:35.503821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.540002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.540218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 01:40:53.450 [2024-12-09 05:35:35.540253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.171 ms 01:40:53.450 [2024-12-09 05:35:35.540268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.575900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.575945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 01:40:53.450 [2024-12-09 05:35:35.575965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.643 ms 01:40:53.450 [2024-12-09 05:35:35.575977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.576751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.576778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:40:53.450 [2024-12-09 05:35:35.576802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.742 ms 01:40:53.450 [2024-12-09 05:35:35.576815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.677722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.677788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 01:40:53.450 [2024-12-09 05:35:35.677816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.996 ms 01:40:53.450 [2024-12-09 05:35:35.677830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.716240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
01:40:53.450 [2024-12-09 05:35:35.716296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 01:40:53.450 [2024-12-09 05:35:35.716335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.364 ms 01:40:53.450 [2024-12-09 05:35:35.716350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.752385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.752434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 01:40:53.450 [2024-12-09 05:35:35.752457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.038 ms 01:40:53.450 [2024-12-09 05:35:35.752484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.789063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.789113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 01:40:53.450 [2024-12-09 05:35:35.789136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.569 ms 01:40:53.450 [2024-12-09 05:35:35.789148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.789209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.789231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:40:53.450 [2024-12-09 05:35:35.789253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:40:53.450 [2024-12-09 05:35:35.789265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.450 [2024-12-09 05:35:35.789415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:40:53.450 [2024-12-09 05:35:35.789434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 01:40:53.450 [2024-12-09 05:35:35.789452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 01:40:53.450 [2024-12-09 05:35:35.789465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:40:53.451 [2024-12-09 05:35:35.790997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4336.435 ms, result 0 01:40:53.451 { 01:40:53.451 "name": "ftl", 01:40:53.451 "uuid": "162f066f-41b9-45f4-beb4-c6fa7d398deb" 01:40:53.451 } 01:40:53.451 05:35:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 01:40:53.709 [2024-12-09 05:35:36.017415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:40:53.709 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 01:40:53.968 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 01:40:54.227 [2024-12-09 05:35:36.437235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 01:40:54.227 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 01:40:54.227 [2024-12-09 05:35:36.624360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:40:54.227 05:35:36 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:40:54.794 Fill FTL, iteration 1 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:40:54.794 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83945 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83945 /var/tmp/spdk.tgt.sock 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83945 ']' 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 01:40:54.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:40:54.795 05:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:40:54.795 [2024-12-09 05:35:37.108388] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
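
The tcp_dd helper's first call goes through tcp_initiator_setup, whose flow can be reconstructed from the xtrace around it; a sketch under that reading, not a verbatim copy of ftl/common.sh:

  # start a second SPDK app on core 1 to act as the NVMe/TCP initiator (pid 83945 in this run)
  build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  # once it listens, import the FTL namespace exported at 127.0.0.1:4420 as local bdev "ftln1"
  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # persist the bdev subsystem config so spdk_dd can load it directly, then drop the helper app
  { echo '{"subsystems": ['; rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev; echo ']}'; } \
      > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  killprocess "$spdk_ini_pid"
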
01:40:54.795 [2024-12-09 05:35:37.108542] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83945 ] 01:40:55.053 [2024-12-09 05:35:37.296534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:40:55.053 [2024-12-09 05:35:37.439156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:40:56.429 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:40:56.429 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:40:56.429 05:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 01:40:56.429 ftln1 01:40:56.429 05:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 01:40:56.429 05:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83945 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83945 ']' 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83945 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83945 01:40:56.687 killing process with pid 83945 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83945' 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83945 01:40:56.687 05:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83945 01:40:59.233 05:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 01:40:59.234 05:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 01:40:59.234 [2024-12-09 05:35:41.642444] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
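
The fill pass itself is plain spdk_dd arithmetic: --bs=1048576 --count=1024 writes 1024 x 1 MiB = 1 GiB of /dev/urandom into ftln1 at queue depth 2, starting at --seek=0, so the two iterations together touch 2 GiB of the 20 GiB base device. As the later tcp_dd calls below show, once ini.json exists the initiator setup short-circuits:

  [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]   # ftl/common.sh@153
  return 0                                                         # ftl/common.sh@154

and spdk_dd consumes the saved bdev config directly via --json, with no long-lived initiator process.
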
01:40:59.234 [2024-12-09 05:35:41.642641] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84003 ] 01:40:59.493 [2024-12-09 05:35:41.831302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:40:59.751 [2024-12-09 05:35:41.970170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:41:01.135  [2024-12-09T05:35:44.529Z] Copying: 243/1024 [MB] (243 MBps) [2024-12-09T05:35:45.907Z] Copying: 487/1024 [MB] (244 MBps) [2024-12-09T05:35:46.844Z] Copying: 732/1024 [MB] (245 MBps) [2024-12-09T05:35:46.844Z] Copying: 974/1024 [MB] (242 MBps) [2024-12-09T05:35:48.231Z] Copying: 1024/1024 [MB] (average 242 MBps) 01:41:05.775 01:41:05.775 Calculate MD5 checksum, iteration 1 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:41:05.775 05:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:41:05.775 [2024-12-09 05:35:48.120338] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
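
This pass is the verify half: tcp_dd reads the same 1 GiB back out of ftln1 (--ib=ftln1 --skip=0) into test/ftl/file, and the checksum is banked in the sums array, per the xtrace below:

  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d ' ')   # -> 8e679b8cf171aa48e8fa818622fe72a9 for iteration 1

Presumably these sums are compared against fresh readbacks after the shutdown and upgrade later in the test; this stretch of the log only records them.
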
01:41:05.775 [2024-12-09 05:35:48.120716] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 01:41:06.033 [2024-12-09 05:35:48.309147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:06.033 [2024-12-09 05:35:48.447431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:41:07.933  [2024-12-09T05:35:50.646Z] Copying: 628/1024 [MB] (628 MBps) [2024-12-09T05:35:52.015Z] Copying: 1024/1024 [MB] (average 621 MBps) 01:41:09.559 01:41:09.559 05:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 01:41:09.559 05:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8e679b8cf171aa48e8fa818622fe72a9 01:41:11.459 Fill FTL, iteration 2 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:41:11.459 05:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 01:41:11.459 [2024-12-09 05:35:53.521651] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
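
Putting the seek/skip bookkeeping together, the trace implies a loop of the following shape; a reconstruction from the xtrace, with FTL_FILE standing in for /home/vagrant/spdk_repo/spdk/test/ftl/file:

  seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2; sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))                                   # 0 -> 1024 -> 2048, as logged
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$FTL_FILE" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))                                   # mirrors seek
      sums[i]=$(md5sum "$FTL_FILE" | cut -f1 -d ' ')
  done
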
01:41:11.459 [2024-12-09 05:35:53.522647] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84124 ] 01:41:11.459 [2024-12-09 05:35:53.709627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:11.459 [2024-12-09 05:35:53.847543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:41:13.359  [2024-12-09T05:35:56.382Z] Copying: 242/1024 [MB] (242 MBps) [2024-12-09T05:35:57.759Z] Copying: 479/1024 [MB] (237 MBps) [2024-12-09T05:35:58.694Z] Copying: 723/1024 [MB] (244 MBps) [2024-12-09T05:35:58.694Z] Copying: 970/1024 [MB] (247 MBps) [2024-12-09T05:36:00.072Z] Copying: 1024/1024 [MB] (average 242 MBps) 01:41:17.616 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 01:41:17.616 Calculate MD5 checksum, iteration 2 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:41:17.616 05:35:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:41:17.616 [2024-12-09 05:36:00.002236] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
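
This second readback is the last data pass; after it the test turns to FTL property RPCs: verbose_mode is switched on, bdev_ftl_get_properties dumps band and chunk state, prep_upgrade_on_shutdown is set to true, and jq counts the cache chunks that already hold data, presumably a guard against shutting down with an empty cache. The counting filter, copied from the trace below:

  rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3 in this run: two CLOSED chunks at utilization 1.0 plus one OPEN chunk at 0.001953125
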
01:41:17.616 [2024-12-09 05:36:00.002691] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84195 ] 01:41:17.873 [2024-12-09 05:36:00.195063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:18.131 [2024-12-09 05:36:00.333650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:41:20.065  [2024-12-09T05:36:02.779Z] Copying: 684/1024 [MB] (684 MBps) [2024-12-09T05:36:04.153Z] Copying: 1024/1024 [MB] (average 669 MBps) 01:41:21.697 01:41:21.697 05:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 01:41:21.697 05:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:41:23.597 05:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 01:41:23.597 05:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6213c8edac3dab4953e6660622bd0f81 01:41:23.597 05:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 01:41:23.597 05:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:41:23.597 05:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:41:23.855 [2024-12-09 05:36:06.067290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:23.855 [2024-12-09 05:36:06.067880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:41:23.855 [2024-12-09 05:36:06.068127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 01:41:23.855 [2024-12-09 05:36:06.068148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:23.855 [2024-12-09 05:36:06.068202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:23.855 [2024-12-09 05:36:06.068224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:41:23.855 [2024-12-09 05:36:06.068237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:41:23.855 [2024-12-09 05:36:06.068249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:23.855 [2024-12-09 05:36:06.068271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:23.855 [2024-12-09 05:36:06.068283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:41:23.855 [2024-12-09 05:36:06.068295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:41:23.855 [2024-12-09 05:36:06.068308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:23.855 [2024-12-09 05:36:06.068401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.083 ms, result 0 01:41:23.855 true 01:41:23.855 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:41:23.855 { 01:41:23.855 "name": "ftl", 01:41:23.855 "properties": [ 01:41:23.855 { 01:41:23.855 "name": "superblock_version", 01:41:23.855 "value": 5, 01:41:23.855 "read-only": true 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "name": "base_device", 01:41:23.855 "bands": [ 01:41:23.855 { 01:41:23.855 "id": 0, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 
01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 1, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 2, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 3, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 4, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 5, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 6, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 7, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 8, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 9, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 10, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 11, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 12, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 13, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 14, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 15, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 16, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 17, 01:41:23.855 "state": "FREE", 01:41:23.855 "validity": 0.0 01:41:23.855 } 01:41:23.855 ], 01:41:23.855 "read-only": true 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "name": "cache_device", 01:41:23.855 "type": "bdev", 01:41:23.855 "chunks": [ 01:41:23.855 { 01:41:23.855 "id": 0, 01:41:23.855 "state": "INACTIVE", 01:41:23.855 "utilization": 0.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 1, 01:41:23.855 "state": "CLOSED", 01:41:23.855 "utilization": 1.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 2, 01:41:23.855 "state": "CLOSED", 01:41:23.855 "utilization": 1.0 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 3, 01:41:23.855 "state": "OPEN", 01:41:23.855 "utilization": 0.001953125 01:41:23.855 }, 01:41:23.855 { 01:41:23.855 "id": 4, 01:41:23.855 "state": "OPEN", 01:41:23.855 "utilization": 0.0 01:41:23.855 } 01:41:23.855 ], 01:41:23.856 "read-only": true 01:41:23.856 }, 01:41:23.856 { 01:41:23.856 "name": "verbose_mode", 01:41:23.856 "value": true, 01:41:23.856 "unit": "", 01:41:23.856 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:41:23.856 }, 01:41:23.856 { 01:41:23.856 "name": "prep_upgrade_on_shutdown", 01:41:23.856 "value": false, 01:41:23.856 "unit": "", 01:41:23.856 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:41:23.856 } 01:41:23.856 ] 01:41:23.856 } 01:41:23.856 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 01:41:24.115 [2024-12-09 05:36:06.447244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
01:41:24.115 [2024-12-09 05:36:06.447295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:41:24.115 [2024-12-09 05:36:06.447313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:41:24.115 [2024-12-09 05:36:06.447324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:24.115 [2024-12-09 05:36:06.447351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:24.115 [2024-12-09 05:36:06.447362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:41:24.115 [2024-12-09 05:36:06.447373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:41:24.115 [2024-12-09 05:36:06.447383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:24.115 [2024-12-09 05:36:06.447403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:24.115 [2024-12-09 05:36:06.447413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:41:24.115 [2024-12-09 05:36:06.447424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:41:24.115 [2024-12-09 05:36:06.447433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:24.115 [2024-12-09 05:36:06.447513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.243 ms, result 0 01:41:24.115 true 01:41:24.115 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 01:41:24.115 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 01:41:24.115 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:41:24.373 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 01:41:24.373 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 01:41:24.373 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:41:24.632 [2024-12-09 05:36:06.887196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:24.632 [2024-12-09 05:36:06.887248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:41:24.632 [2024-12-09 05:36:06.887265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:41:24.632 [2024-12-09 05:36:06.887276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:24.632 [2024-12-09 05:36:06.887302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:24.632 [2024-12-09 05:36:06.887313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:41:24.632 [2024-12-09 05:36:06.887325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:41:24.632 [2024-12-09 05:36:06.887335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:24.632 [2024-12-09 05:36:06.887354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:24.632 [2024-12-09 05:36:06.887365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:41:24.632 [2024-12-09 05:36:06.887375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:41:24.632 [2024-12-09 05:36:06.887385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 01:41:24.632 [2024-12-09 05:36:06.887443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.237 ms, result 0 01:41:24.632 true 01:41:24.632 05:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:41:24.632 { 01:41:24.632 "name": "ftl", 01:41:24.632 "properties": [ 01:41:24.632 { 01:41:24.632 "name": "superblock_version", 01:41:24.632 "value": 5, 01:41:24.632 "read-only": true 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "name": "base_device", 01:41:24.632 "bands": [ 01:41:24.632 { 01:41:24.632 "id": 0, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 1, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 2, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 3, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 4, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 5, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 6, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 7, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 8, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 9, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 10, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 11, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 12, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 13, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 14, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 15, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 16, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 17, 01:41:24.632 "state": "FREE", 01:41:24.632 "validity": 0.0 01:41:24.632 } 01:41:24.632 ], 01:41:24.632 "read-only": true 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "name": "cache_device", 01:41:24.632 "type": "bdev", 01:41:24.632 "chunks": [ 01:41:24.632 { 01:41:24.632 "id": 0, 01:41:24.632 "state": "INACTIVE", 01:41:24.632 "utilization": 0.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 1, 01:41:24.632 "state": "CLOSED", 01:41:24.632 "utilization": 1.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 2, 01:41:24.632 "state": "CLOSED", 01:41:24.632 "utilization": 1.0 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 3, 01:41:24.632 "state": "OPEN", 01:41:24.632 "utilization": 0.001953125 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "id": 4, 01:41:24.632 "state": "OPEN", 01:41:24.632 "utilization": 0.0 01:41:24.632 } 01:41:24.632 ], 01:41:24.632 "read-only": true 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "name": "verbose_mode", 
01:41:24.632 "value": true, 01:41:24.632 "unit": "", 01:41:24.632 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:41:24.632 }, 01:41:24.632 { 01:41:24.632 "name": "prep_upgrade_on_shutdown", 01:41:24.632 "value": true, 01:41:24.632 "unit": "", 01:41:24.632 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:41:24.632 } 01:41:24.632 ] 01:41:24.632 } 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83811 ]] 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83811 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83811 ']' 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83811 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83811 01:41:24.891 killing process with pid 83811 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83811' 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83811 01:41:24.891 05:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83811 01:41:26.269 [2024-12-09 05:36:08.312990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 01:41:26.269 [2024-12-09 05:36:08.333056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:26.269 [2024-12-09 05:36:08.333099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 01:41:26.269 [2024-12-09 05:36:08.333116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:41:26.269 [2024-12-09 05:36:08.333127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:26.269 [2024-12-09 05:36:08.333151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 01:41:26.269 [2024-12-09 05:36:08.337646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:26.269 [2024-12-09 05:36:08.337676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 01:41:26.269 [2024-12-09 05:36:08.337689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.485 ms 01:41:26.269 [2024-12-09 05:36:08.337704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.619203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.619539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 01:41:34.385 [2024-12-09 05:36:15.619580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7293.284 ms 01:41:34.385 [2024-12-09 05:36:15.619593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.620667] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.620695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 01:41:34.385 [2024-12-09 05:36:15.620707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.049 ms 01:41:34.385 [2024-12-09 05:36:15.620718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.621630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.621649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 01:41:34.385 [2024-12-09 05:36:15.621662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.882 ms 01:41:34.385 [2024-12-09 05:36:15.621679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.636261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.636301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 01:41:34.385 [2024-12-09 05:36:15.636316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.569 ms 01:41:34.385 [2024-12-09 05:36:15.636327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.645584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.645623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 01:41:34.385 [2024-12-09 05:36:15.645636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.237 ms 01:41:34.385 [2024-12-09 05:36:15.645647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.645742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.645762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 01:41:34.385 [2024-12-09 05:36:15.645773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 01:41:34.385 [2024-12-09 05:36:15.645783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.659829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.659863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 01:41:34.385 [2024-12-09 05:36:15.659876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.051 ms 01:41:34.385 [2024-12-09 05:36:15.659886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.673652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.673821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 01:41:34.385 [2024-12-09 05:36:15.673844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.755 ms 01:41:34.385 [2024-12-09 05:36:15.673854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.687893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.687926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 01:41:34.385 [2024-12-09 05:36:15.687939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.989 ms 01:41:34.385 [2024-12-09 05:36:15.687949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.701767] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.385 [2024-12-09 05:36:15.701800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 01:41:34.385 [2024-12-09 05:36:15.701813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.765 ms 01:41:34.385 [2024-12-09 05:36:15.701822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.385 [2024-12-09 05:36:15.701855] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 01:41:34.385 [2024-12-09 05:36:15.701885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:41:34.385 [2024-12-09 05:36:15.701899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 01:41:34.385 [2024-12-09 05:36:15.701909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 01:41:34.385 [2024-12-09 05:36:15.701920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.701994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:41:34.385 [2024-12-09 05:36:15.702078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 01:41:34.385 [2024-12-09 05:36:15.702088] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 162f066f-41b9-45f4-beb4-c6fa7d398deb 01:41:34.385 [2024-12-09 05:36:15.702099] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 01:41:34.385 [2024-12-09 05:36:15.702109] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 01:41:34.386 [2024-12-09 05:36:15.702119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 01:41:34.386 [2024-12-09 05:36:15.702130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 01:41:34.386 [2024-12-09 05:36:15.702145] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 01:41:34.386 [2024-12-09 05:36:15.702155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 01:41:34.386 [2024-12-09 05:36:15.702169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 01:41:34.386 [2024-12-09 05:36:15.702178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 01:41:34.386 [2024-12-09 05:36:15.702188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 01:41:34.386 [2024-12-09 05:36:15.702198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.386 [2024-12-09 05:36:15.702209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 01:41:34.386 [2024-12-09 05:36:15.702219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 01:41:34.386 [2024-12-09 05:36:15.702230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.722080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.386 [2024-12-09 05:36:15.722113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 01:41:34.386 [2024-12-09 05:36:15.722132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.864 ms 01:41:34.386 [2024-12-09 05:36:15.722143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.722749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:34.386 [2024-12-09 05:36:15.722771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 01:41:34.386 [2024-12-09 05:36:15.722783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 01:41:34.386 [2024-12-09 05:36:15.722793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.787932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:15.787973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:41:34.386 [2024-12-09 05:36:15.787987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:15.787998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.788032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:15.788044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:41:34.386 [2024-12-09 05:36:15.788055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:15.788065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.788155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:15.788170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:41:34.386 [2024-12-09 05:36:15.788186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:15.788197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.788215] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:15.788226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:41:34.386 [2024-12-09 05:36:15.788236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:15.788247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:15.912489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:15.912546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:41:34.386 [2024-12-09 05:36:15.912568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:15.912579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.010689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.010963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:41:34.386 [2024-12-09 05:36:16.010988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.011173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.011187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:41:34.386 [2024-12-09 05:36:16.011199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.011265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.011278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:41:34.386 [2024-12-09 05:36:16.011289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.011429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.011444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:41:34.386 [2024-12-09 05:36:16.011456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.011538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.011552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 01:41:34.386 [2024-12-09 05:36:16.011564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.011625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.011638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:41:34.386 [2024-12-09 05:36:16.011649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 
[2024-12-09 05:36:16.011720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:41:34.386 [2024-12-09 05:36:16.011734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:41:34.386 [2024-12-09 05:36:16.011745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:41:34.386 [2024-12-09 05:36:16.011756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:34.386 [2024-12-09 05:36:16.011909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7691.277 ms, result 0 01:41:37.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84407 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84407 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84407 ']' 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:41:37.808 05:36:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:41:37.808 [2024-12-09 05:36:19.934682] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
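The tcp_target_setup above relaunches spdk_tgt pinned to core 0 and then waitforlisten blocks until the new process (pid 84407) answers on /var/tmp/spdk.sock before any further RPCs are issued. A rough sketch of that readiness loop, assuming the rpc.py path used throughout this log and the generic rpc_get_methods call as the probe (the real helper in common/autotest_common.sh is more involved):

```bash
sock=/var/tmp/spdk.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Retry a trivial RPC until the target's UNIX socket is up; bail out after a
# bounded number of attempts (max_retries=100 in the trace above).
for (( n = 0; n < 100; n++ )); do
  "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.2
done
```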
01:41:37.808 [2024-12-09 05:36:19.934812] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84407 ] 01:41:37.808 [2024-12-09 05:36:20.117512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:37.808 [2024-12-09 05:36:20.245248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:41:39.186 [2024-12-09 05:36:21.316670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:41:39.186 [2024-12-09 05:36:21.316757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:41:39.186 [2024-12-09 05:36:21.464380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.464614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:41:39.186 [2024-12-09 05:36:21.464642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:41:39.186 [2024-12-09 05:36:21.464654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.464739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.464754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:41:39.186 [2024-12-09 05:36:21.464765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 01:41:39.186 [2024-12-09 05:36:21.464776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.464800] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:41:39.186 [2024-12-09 05:36:21.465763] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:41:39.186 [2024-12-09 05:36:21.465792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.465803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:41:39.186 [2024-12-09 05:36:21.465814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.998 ms 01:41:39.186 [2024-12-09 05:36:21.465825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.468224] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 01:41:39.186 [2024-12-09 05:36:21.488947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.488992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 01:41:39.186 [2024-12-09 05:36:21.489008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.757 ms 01:41:39.186 [2024-12-09 05:36:21.489019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.489089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.489103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 01:41:39.186 [2024-12-09 05:36:21.489115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 01:41:39.186 [2024-12-09 05:36:21.489126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.501404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 
05:36:21.501600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:41:39.186 [2024-12-09 05:36:21.501625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.213 ms 01:41:39.186 [2024-12-09 05:36:21.501636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.501726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.501741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:41:39.186 [2024-12-09 05:36:21.501753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 01:41:39.186 [2024-12-09 05:36:21.501765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.501832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.501850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:41:39.186 [2024-12-09 05:36:21.501863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 01:41:39.186 [2024-12-09 05:36:21.501874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.501904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:41:39.186 [2024-12-09 05:36:21.507707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.507741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:41:39.186 [2024-12-09 05:36:21.507759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.820 ms 01:41:39.186 [2024-12-09 05:36:21.507769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.507799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.507810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:41:39.186 [2024-12-09 05:36:21.507822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:41:39.186 [2024-12-09 05:36:21.507833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.507877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 01:41:39.186 [2024-12-09 05:36:21.507919] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 01:41:39.186 [2024-12-09 05:36:21.507959] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 01:41:39.186 [2024-12-09 05:36:21.507978] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 01:41:39.186 [2024-12-09 05:36:21.508071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:41:39.186 [2024-12-09 05:36:21.508086] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:41:39.186 [2024-12-09 05:36:21.508100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:41:39.186 [2024-12-09 05:36:21.508113] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:41:39.186 [2024-12-09 05:36:21.508129] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 01:41:39.186 [2024-12-09 05:36:21.508142] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:41:39.186 [2024-12-09 05:36:21.508153] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:41:39.186 [2024-12-09 05:36:21.508163] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:41:39.186 [2024-12-09 05:36:21.508173] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:41:39.186 [2024-12-09 05:36:21.508184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.508195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:41:39.186 [2024-12-09 05:36:21.508206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 01:41:39.186 [2024-12-09 05:36:21.508216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.508289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.186 [2024-12-09 05:36:21.508300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:41:39.186 [2024-12-09 05:36:21.508316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 01:41:39.186 [2024-12-09 05:36:21.508326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.186 [2024-12-09 05:36:21.508424] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:41:39.186 [2024-12-09 05:36:21.508438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:41:39.186 [2024-12-09 05:36:21.508450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:41:39.186 [2024-12-09 05:36:21.508476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.186 [2024-12-09 05:36:21.508488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:41:39.186 [2024-12-09 05:36:21.508498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:41:39.187 [2024-12-09 05:36:21.508518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:41:39.187 [2024-12-09 05:36:21.508530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:41:39.187 [2024-12-09 05:36:21.508540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:41:39.187 [2024-12-09 05:36:21.508576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 01:41:39.187 [2024-12-09 05:36:21.508587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:41:39.187 [2024-12-09 05:36:21.508606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 01:41:39.187 [2024-12-09 05:36:21.508615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:41:39.187 [2024-12-09 05:36:21.508634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:41:39.187 [2024-12-09 05:36:21.508644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508654] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:41:39.187 [2024-12-09 05:36:21.508664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:41:39.187 [2024-12-09 05:36:21.508673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:39.187 [2024-12-09 05:36:21.508683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:41:39.187 [2024-12-09 05:36:21.508705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:41:39.187 [2024-12-09 05:36:21.508714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:39.187 [2024-12-09 05:36:21.508730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:41:39.187 [2024-12-09 05:36:21.508740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:41:39.187 [2024-12-09 05:36:21.508749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:39.187 [2024-12-09 05:36:21.508758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:41:39.187 [2024-12-09 05:36:21.508768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:41:39.187 [2024-12-09 05:36:21.508777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:39.187 [2024-12-09 05:36:21.508786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:41:39.187 [2024-12-09 05:36:21.508795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:41:39.187 [2024-12-09 05:36:21.508804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:41:39.187 [2024-12-09 05:36:21.508823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:41:39.187 [2024-12-09 05:36:21.508832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:41:39.187 [2024-12-09 05:36:21.508851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:41:39.187 [2024-12-09 05:36:21.508877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:41:39.187 [2024-12-09 05:36:21.508887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 01:41:39.187 [2024-12-09 05:36:21.508906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:41:39.187 [2024-12-09 05:36:21.508916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:41:39.187 [2024-12-09 05:36:21.508931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:39.187 [2024-12-09 05:36:21.508942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:41:39.187 [2024-12-09 05:36:21.508952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:41:39.187 [2024-12-09 05:36:21.508962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:41:39.187 [2024-12-09 05:36:21.508987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:41:39.187 [2024-12-09 05:36:21.508996] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:41:39.187 [2024-12-09 05:36:21.509006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:41:39.187 [2024-12-09 05:36:21.509017] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:41:39.187 [2024-12-09 05:36:21.509030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:41:39.187 [2024-12-09 05:36:21.509053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:41:39.187 [2024-12-09 05:36:21.509084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:41:39.187 [2024-12-09 05:36:21.509096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:41:39.187 [2024-12-09 05:36:21.509106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:41:39.187 [2024-12-09 05:36:21.509117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:41:39.187 [2024-12-09 05:36:21.509192] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 01:41:39.187 [2024-12-09 05:36:21.509203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:41:39.187 [2024-12-09 05:36:21.509226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:41:39.187 [2024-12-09 05:36:21.509236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:41:39.187 [2024-12-09 05:36:21.509251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:41:39.187 [2024-12-09 05:36:21.509262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:39.187 [2024-12-09 05:36:21.509273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:41:39.187 [2024-12-09 05:36:21.509284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.894 ms 01:41:39.187 [2024-12-09 05:36:21.509294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:39.187 [2024-12-09 05:36:21.509347] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 01:41:39.187 [2024-12-09 05:36:21.509365] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 01:41:43.376 [2024-12-09 05:36:25.280315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.280387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 01:41:43.376 [2024-12-09 05:36:25.280407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3777.087 ms 01:41:43.376 [2024-12-09 05:36:25.280418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.327226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.327510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:41:43.376 [2024-12-09 05:36:25.327541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.486 ms 01:41:43.376 [2024-12-09 05:36:25.327554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.327682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.327696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:41:43.376 [2024-12-09 05:36:25.327710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 01:41:43.376 [2024-12-09 05:36:25.327721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.379549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.379596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:41:43.376 [2024-12-09 05:36:25.379616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.866 ms 01:41:43.376 [2024-12-09 05:36:25.379627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.379676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.379689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:41:43.376 [2024-12-09 05:36:25.379701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:41:43.376 [2024-12-09 05:36:25.379711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.380521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.380538] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:41:43.376 [2024-12-09 05:36:25.380551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.733 ms 01:41:43.376 [2024-12-09 05:36:25.380569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.380633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.380662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:41:43.376 [2024-12-09 05:36:25.380674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 01:41:43.376 [2024-12-09 05:36:25.380684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.405712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.405750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:41:43.376 [2024-12-09 05:36:25.405765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.043 ms 01:41:43.376 [2024-12-09 05:36:25.405776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.452505] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 01:41:43.376 [2024-12-09 05:36:25.452568] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 01:41:43.376 [2024-12-09 05:36:25.452590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.452604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 01:41:43.376 [2024-12-09 05:36:25.452618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.755 ms 01:41:43.376 [2024-12-09 05:36:25.452630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.472266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.472444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 01:41:43.376 [2024-12-09 05:36:25.472480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.613 ms 01:41:43.376 [2024-12-09 05:36:25.472492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.489547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.489582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 01:41:43.376 [2024-12-09 05:36:25.489595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.028 ms 01:41:43.376 [2024-12-09 05:36:25.489604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.506275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.506310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 01:41:43.376 [2024-12-09 05:36:25.506323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.655 ms 01:41:43.376 [2024-12-09 05:36:25.506333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.507129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.507248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:41:43.376 [2024-12-09 
05:36:25.507261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.686 ms 01:41:43.376 [2024-12-09 05:36:25.507272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.376 [2024-12-09 05:36:25.600154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.376 [2024-12-09 05:36:25.600222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 01:41:43.377 [2024-12-09 05:36:25.600240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.004 ms 01:41:43.377 [2024-12-09 05:36:25.600252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.377 [2024-12-09 05:36:25.610822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:41:43.377 [2024-12-09 05:36:25.611807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.377 [2024-12-09 05:36:25.612032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:41:43.377 [2024-12-09 05:36:25.612055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.520 ms 01:41:43.377 [2024-12-09 05:36:25.612069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.377 [2024-12-09 05:36:25.612177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.377 [2024-12-09 05:36:25.612195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 01:41:43.377 [2024-12-09 05:36:25.612209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:41:43.377 [2024-12-09 05:36:25.612220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.377 [2024-12-09 05:36:25.612296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.377 [2024-12-09 05:36:25.612310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:41:43.377 [2024-12-09 05:36:25.612322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 01:41:43.377 [2024-12-09 05:36:25.612334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.377 [2024-12-09 05:36:25.612365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.377 [2024-12-09 05:36:25.612377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:41:43.377 [2024-12-09 05:36:25.612393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:41:43.377 [2024-12-09 05:36:25.612404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.377 [2024-12-09 05:36:25.612448] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 01:41:43.377 [2024-12-09 05:36:25.612479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.377 [2024-12-09 05:36:25.612492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 01:41:43.377 [2024-12-09 05:36:25.612503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 01:41:43.377 [2024-12-09 05:36:25.612514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:43.377 [2024-12-09 05:36:25.648962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:43.377 [2024-12-09 05:36:25.649009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 01:41:43.377 [2024-12-09 05:36:25.649024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.479 ms 01:41:43.377 [2024-12-09 05:36:25.649034] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:41:43.377 [2024-12-09 05:36:25.649119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:41:43.377 [2024-12-09 05:36:25.649132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
01:41:43.377 [2024-12-09 05:36:25.649144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms
01:41:43.377 [2024-12-09 05:36:25.649154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:41:43.377 [2024-12-09 05:36:25.650655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4192.492 ms, result 0
01:41:43.377 [2024-12-09 05:36:25.665310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
01:41:43.377 [2024-12-09 05:36:25.681299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
01:41:43.377 [2024-12-09 05:36:25.690260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
01:41:43.636 05:36:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:41:43.636 05:36:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
01:41:43.636 05:36:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
01:41:43.636 05:36:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
01:41:43.636 05:36:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
01:41:43.896 [2024-12-09 05:36:26.149614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:41:43.896 [2024-12-09 05:36:26.149653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
01:41:43.896 [2024-12-09 05:36:26.149673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
01:41:43.896 [2024-12-09 05:36:26.149684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:41:43.896 [2024-12-09 05:36:26.149707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:41:43.896 [2024-12-09 05:36:26.149719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
01:41:43.896 [2024-12-09 05:36:26.149730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
01:41:43.896 [2024-12-09 05:36:26.149739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:41:43.896 [2024-12-09 05:36:26.149761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:41:43.896 [2024-12-09 05:36:26.149772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
01:41:43.896 [2024-12-09 05:36:26.149783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
01:41:43.896 [2024-12-09 05:36:26.149793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:41:43.896 [2024-12-09 05:36:26.149850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.229 ms, result 0
01:41:43.896 true
01:41:44.155 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
01:41:44.155 {
01:41:44.155   "name": "ftl",
01:41:44.155   "properties": [
01:41:44.155     { "name": "superblock_version", "value": 5, "read-only": true },
01:41:44.155     {
01:41:44.155       "name": "base_device",
01:41:44.155       "bands": [
01:41:44.155         { "id": 0, "state": "CLOSED", "validity": 1.0 },
01:41:44.155         { "id": 1, "state": "CLOSED", "validity": 1.0 },
01:41:44.156         { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
01:41:44.156         { "id": 3, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 4, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 5, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 6, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 7, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 8, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 9, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 10, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 11, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 12, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 13, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 14, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 15, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 16, "state": "FREE", "validity": 0.0 },
01:41:44.156         { "id": 17, "state": "FREE", "validity": 0.0 }
01:41:44.156       ],
01:41:44.156       "read-only": true
01:41:44.156     },
01:41:44.156     {
01:41:44.156       "name": "cache_device",
01:41:44.156       "type": "bdev",
01:41:44.156       "chunks": [
01:41:44.156         { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
01:41:44.156         { "id": 1, "state": "OPEN", "utilization": 0.0 },
01:41:44.156         { "id": 2, "state": "OPEN", "utilization": 0.0 },
01:41:44.156         { "id": 3, "state": "FREE", "utilization": 0.0 },
01:41:44.156         { "id": 4, "state": "FREE", "utilization": 0.0 }
01:41:44.156       ],
01:41:44.156       "read-only": true
01:41:44.156     },
01:41:44.156     { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
01:41:44.156     { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
01:41:44.156   ]
01:41:44.156 }
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
01:41:44.156 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:41:44.415 Validate MD5 checksum, iteration 1 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
01:41:44.415 05:36:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:41:44.674 [2024-12-09 05:36:26.898522] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
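After the restart, upgrade_shutdown.sh confirms via the two jq filters traced above that no NV cache chunks hold user data (used=0) and that the band filter returns nothing open (opened=0) before it starts validating checksums. Condensed as a sketch, with the jq filters copied verbatim from the trace and assuming a running target reachable over the default RPC socket:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
props=$("$RPC" bdev_ftl_get_properties -b ftl)
# Chunks of the NV cache that still carry data after the clean shutdown/restart
used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
# Bands reported in an OPENED state (filter exactly as the script runs it)
opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
[[ $used -ne 0 ]] && echo "NV cache still holds data in $used chunk(s)" >&2
[[ $opened -ne 0 ]] && echo "$opened band(s) still open" >&2
```

For the properties dump above, both counts come out zero: every chunk shows utilization 0.0 and no band is in an OPENED state.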
01:41:44.674 [2024-12-09 05:36:26.898641] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84500 ] 01:41:44.674 [2024-12-09 05:36:27.084254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:44.933 [2024-12-09 05:36:27.197855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:41:46.837  [2024-12-09T05:36:29.552Z] Copying: 607/1024 [MB] (607 MBps) [2024-12-09T05:36:31.457Z] Copying: 1024/1024 [MB] (average 607 MBps) 01:41:49.001 01:41:49.001 05:36:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 01:41:49.001 05:36:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:41:50.933 Validate MD5 checksum, iteration 2 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8e679b8cf171aa48e8fa818622fe72a9 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8e679b8cf171aa48e8fa818622fe72a9 != \8\e\6\7\9\b\8\c\f\1\7\1\a\a\4\8\e\8\f\a\8\1\8\6\2\2\f\e\7\2\a\9 ]] 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:41:50.933 05:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:41:50.933 [2024-12-09 05:36:33.145613] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
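Iteration 1 above read the first 1 GiB of ftln1 (1024 blocks of 1048576 bytes, averaging 607 MBps) and its md5 digest matched; the backslash-heavy string in the [[ ... ]] trace is just xtrace printing the expected digest as a literal pattern. The whole validation pass, condensed as a sketch: tcp_dd is the ftl/common.sh helper invoked above and is assumed to be sourced, and the reference digests are the two reported in this log.

```bash
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
declare -A expected=(
  [1]=8e679b8cf171aa48e8fa818622fe72a9   # iteration 1, blocks 0-1023
  [2]=6213c8edac3dab4953e6660622bd0f81   # iteration 2, blocks 1024-2047
)
skip=0
for i in 1 2; do
  echo "Validate MD5 checksum, iteration $i"
  # Copy a 1 GiB slice out of ftln1 over NVMe/TCP: 1 MiB blocks, queue depth 2
  tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum "$file" | cut -f1 '-d ')
  # Quoting makes [[ ]] compare the digest literally rather than as a glob
  [[ "$sum" == "${expected[$i]}" ]] || { echo "MD5 mismatch, iteration $i: $sum" >&2; exit 1; }
  skip=$((skip + 1024))
done
```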
01:41:50.933 [2024-12-09 05:36:33.145935] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84570 ] 01:41:50.933 [2024-12-09 05:36:33.332048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:51.193 [2024-12-09 05:36:33.463375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:41:53.101  [2024-12-09T05:36:35.816Z] Copying: 663/1024 [MB] (663 MBps) [2024-12-09T05:36:39.108Z] Copying: 1024/1024 [MB] (average 669 MBps) 01:41:56.652 01:41:56.652 05:36:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 01:41:56.652 05:36:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6213c8edac3dab4953e6660622bd0f81 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6213c8edac3dab4953e6660622bd0f81 != \6\2\1\3\c\8\e\d\a\c\3\d\a\b\4\9\5\3\e\6\6\6\0\6\2\2\b\d\0\f\8\1 ]] 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84407 ]] 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84407 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84648 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:41:58.037 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84648 01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84648 ']' 01:41:58.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
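This is the step the test is named for: kill -9 takes the target down without running any of the FTL shutdown path, and a new spdk_tgt (pid 84648) is immediately started from the saved tgt.json, forcing FTL to bring up a dirty device. A sketch of the two common.sh steps as reconstructed from the xtrace (helper bodies are abridged and the pid-capture mechanism is assumed; $rootdir stands for /home/vagrant/spdk_repo/spdk):

    tcp_target_shutdown_dirty() {
        # SIGKILL skips 'FTL shutdown' entirely, leaving SHM, the NV cache
        # and any open chunks exactly as they were mid-run.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Restarting from the same config makes the new target reattach the
        # FTL bdev, which must now recover from that dirty state on load.
        "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$rootdir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"
    }

The startup trace that follows ('Recover band state', 'Restore P2L checkpoints', two 'Recover open chunk' passes) is FTL replaying exactly that dirty state before the checksums are re-validated.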
01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:41:58.038 05:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:41:58.038 [2024-12-09 05:36:40.382710] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:41:58.038 [2024-12-09 05:36:40.383051] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84648 ] 01:41:58.038 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84407 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 01:41:58.302 [2024-12-09 05:36:40.571078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:58.302 [2024-12-09 05:36:40.692771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:41:59.687 [2024-12-09 05:36:41.774082] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:41:59.687 [2024-12-09 05:36:41.774177] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:41:59.687 [2024-12-09 05:36:41.922105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.687 [2024-12-09 05:36:41.922148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:41:59.687 [2024-12-09 05:36:41.922166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:41:59.687 [2024-12-09 05:36:41.922176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.922241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.687 [2024-12-09 05:36:41.922254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:41:59.687 [2024-12-09 05:36:41.922266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 01:41:59.687 [2024-12-09 05:36:41.922276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.922298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:41:59.687 [2024-12-09 05:36:41.923322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:41:59.687 [2024-12-09 05:36:41.923354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.687 [2024-12-09 05:36:41.923365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:41:59.687 [2024-12-09 05:36:41.923377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.061 ms 01:41:59.687 [2024-12-09 05:36:41.923387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.923831] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 01:41:59.687 [2024-12-09 05:36:41.948842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.687 [2024-12-09 05:36:41.948876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 01:41:59.687 [2024-12-09 05:36:41.948890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.052 ms 01:41:59.687 [2024-12-09 05:36:41.948901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.962230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 01:41:59.687 [2024-12-09 05:36:41.962266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 01:41:59.687 [2024-12-09 05:36:41.962278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 01:41:59.687 [2024-12-09 05:36:41.962287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.962828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.687 [2024-12-09 05:36:41.962844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:41:59.687 [2024-12-09 05:36:41.962856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 01:41:59.687 [2024-12-09 05:36:41.962867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.962936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.687 [2024-12-09 05:36:41.962966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:41:59.687 [2024-12-09 05:36:41.962977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 01:41:59.687 [2024-12-09 05:36:41.962988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.687 [2024-12-09 05:36:41.963024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.688 [2024-12-09 05:36:41.963040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:41:59.688 [2024-12-09 05:36:41.963052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 01:41:59.688 [2024-12-09 05:36:41.963062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.688 [2024-12-09 05:36:41.963087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:41:59.688 [2024-12-09 05:36:41.967101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.688 [2024-12-09 05:36:41.967128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:41:59.688 [2024-12-09 05:36:41.967141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.026 ms 01:41:59.688 [2024-12-09 05:36:41.967156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.688 [2024-12-09 05:36:41.967183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.688 [2024-12-09 05:36:41.967194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:41:59.688 [2024-12-09 05:36:41.967205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:41:59.688 [2024-12-09 05:36:41.967215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.688 [2024-12-09 05:36:41.967253] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 01:41:59.688 [2024-12-09 05:36:41.967278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 01:41:59.688 [2024-12-09 05:36:41.967314] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 01:41:59.688 [2024-12-09 05:36:41.967336] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 01:41:59.688 [2024-12-09 05:36:41.967428] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:41:59.688 [2024-12-09 05:36:41.967442] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:41:59.688 [2024-12-09 05:36:41.967456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:41:59.688 [2024-12-09 05:36:41.967481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:41:59.688 [2024-12-09 05:36:41.967493] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 01:41:59.688 [2024-12-09 05:36:41.967506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:41:59.688 [2024-12-09 05:36:41.967517] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:41:59.688 [2024-12-09 05:36:41.967527] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:41:59.688 [2024-12-09 05:36:41.967537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:41:59.688 [2024-12-09 05:36:41.967552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.688 [2024-12-09 05:36:41.967562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:41:59.688 [2024-12-09 05:36:41.967573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 01:41:59.688 [2024-12-09 05:36:41.967584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.688 [2024-12-09 05:36:41.967667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.688 [2024-12-09 05:36:41.967679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:41:59.688 [2024-12-09 05:36:41.967689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 01:41:59.688 [2024-12-09 05:36:41.967700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.688 [2024-12-09 05:36:41.967788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:41:59.688 [2024-12-09 05:36:41.967804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:41:59.688 [2024-12-09 05:36:41.967832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:41:59.688 [2024-12-09 05:36:41.967843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.967855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:41:59.688 [2024-12-09 05:36:41.967864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.967875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:41:59.688 [2024-12-09 05:36:41.967884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:41:59.688 [2024-12-09 05:36:41.967894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:41:59.688 [2024-12-09 05:36:41.967903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.967913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:41:59.688 [2024-12-09 05:36:41.967921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 01:41:59.688 [2024-12-09 05:36:41.967930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.967939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:41:59.688 [2024-12-09 05:36:41.967949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
01:41:59.688 [2024-12-09 05:36:41.967958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.967968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:41:59.688 [2024-12-09 05:36:41.967977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:41:59.688 [2024-12-09 05:36:41.967987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.967996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:41:59.688 [2024-12-09 05:36:41.968006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:41:59.688 [2024-12-09 05:36:41.968027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:59.688 [2024-12-09 05:36:41.968036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:41:59.688 [2024-12-09 05:36:41.968045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:41:59.688 [2024-12-09 05:36:41.968055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:59.688 [2024-12-09 05:36:41.968064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:41:59.688 [2024-12-09 05:36:41.968084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:41:59.688 [2024-12-09 05:36:41.968093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:59.688 [2024-12-09 05:36:41.968102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:41:59.688 [2024-12-09 05:36:41.968112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:41:59.688 [2024-12-09 05:36:41.968121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:41:59.688 [2024-12-09 05:36:41.968132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:41:59.688 [2024-12-09 05:36:41.968142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:41:59.688 [2024-12-09 05:36:41.968167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.968177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:41:59.688 [2024-12-09 05:36:41.968187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:41:59.688 [2024-12-09 05:36:41.968196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.968205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:41:59.688 [2024-12-09 05:36:41.968214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:41:59.688 [2024-12-09 05:36:41.968224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.968232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:41:59.688 [2024-12-09 05:36:41.968242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:41:59.688 [2024-12-09 05:36:41.968251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:41:59.688 [2024-12-09 05:36:41.968260] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 01:41:59.689 [2024-12-09 05:36:41.968271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:41:59.689 [2024-12-09 05:36:41.968281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:41:59.689 [2024-12-09 05:36:41.968291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 01:41:59.689 [2024-12-09 05:36:41.968301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:41:59.689 [2024-12-09 05:36:41.968311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:41:59.689 [2024-12-09 05:36:41.968319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:41:59.689 [2024-12-09 05:36:41.968329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:41:59.689 [2024-12-09 05:36:41.968338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:41:59.689 [2024-12-09 05:36:41.968347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:41:59.689 [2024-12-09 05:36:41.968358] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:41:59.689 [2024-12-09 05:36:41.968371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:41:59.689 [2024-12-09 05:36:41.968394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:41:59.689 [2024-12-09 05:36:41.968424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:41:59.689 [2024-12-09 05:36:41.968434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:41:59.689 [2024-12-09 05:36:41.968444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:41:59.689 [2024-12-09 05:36:41.968455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:41:59.689 [2024-12-09 05:36:41.968541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 01:41:59.689 [2024-12-09 05:36:41.968553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:41:59.689 [2024-12-09 05:36:41.968581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:41:59.689 [2024-12-09 05:36:41.968592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:41:59.689 [2024-12-09 05:36:41.968603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:41:59.689 [2024-12-09 05:36:41.968614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:41.968625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:41:59.689 [2024-12-09 05:36:41.968636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.881 ms 01:41:59.689 [2024-12-09 05:36:41.968646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.012059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.012104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:41:59.689 [2024-12-09 05:36:42.012119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.432 ms 01:41:59.689 [2024-12-09 05:36:42.012146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.012199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.012211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:41:59.689 [2024-12-09 05:36:42.012223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 01:41:59.689 [2024-12-09 05:36:42.012233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.061974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.062174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:41:59.689 [2024-12-09 05:36:42.062198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.764 ms 01:41:59.689 [2024-12-09 05:36:42.062211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.062254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.062267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:41:59.689 [2024-12-09 05:36:42.062279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:41:59.689 [2024-12-09 05:36:42.062297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.062437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.062452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:41:59.689 [2024-12-09 05:36:42.062485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 01:41:59.689 [2024-12-09 05:36:42.062497] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.062547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.062560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:41:59.689 [2024-12-09 05:36:42.062572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 01:41:59.689 [2024-12-09 05:36:42.062590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.087062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.087100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:41:59.689 [2024-12-09 05:36:42.087114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.487 ms 01:41:59.689 [2024-12-09 05:36:42.087130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.689 [2024-12-09 05:36:42.087255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.689 [2024-12-09 05:36:42.087271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 01:41:59.689 [2024-12-09 05:36:42.087299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:41:59.689 [2024-12-09 05:36:42.087310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.142405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.142449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 01:41:59.949 [2024-12-09 05:36:42.142477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.161 ms 01:41:59.949 [2024-12-09 05:36:42.142490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.157175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.157329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:41:59.949 [2024-12-09 05:36:42.157353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.736 ms 01:41:59.949 [2024-12-09 05:36:42.157365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.252380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.252473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 01:41:59.949 [2024-12-09 05:36:42.252493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.097 ms 01:41:59.949 [2024-12-09 05:36:42.252505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.252782] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 01:41:59.949 [2024-12-09 05:36:42.252971] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 01:41:59.949 [2024-12-09 05:36:42.253184] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 01:41:59.949 [2024-12-09 05:36:42.253370] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 01:41:59.949 [2024-12-09 05:36:42.253387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.253399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 01:41:59.949 [2024-12-09 
05:36:42.253411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.803 ms 01:41:59.949 [2024-12-09 05:36:42.253423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.253519] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 01:41:59.949 [2024-12-09 05:36:42.253537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.253555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 01:41:59.949 [2024-12-09 05:36:42.253567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 01:41:59.949 [2024-12-09 05:36:42.253578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.275440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.275629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 01:41:59.949 [2024-12-09 05:36:42.275655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.847 ms 01:41:59.949 [2024-12-09 05:36:42.275669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.288841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.288879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 01:41:59.949 [2024-12-09 05:36:42.288892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 01:41:59.949 [2024-12-09 05:36:42.288902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:41:59.949 [2024-12-09 05:36:42.289034] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 01:41:59.949 [2024-12-09 05:36:42.289345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:41:59.949 [2024-12-09 05:36:42.289355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 01:41:59.949 [2024-12-09 05:36:42.289366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 01:41:59.949 [2024-12-09 05:36:42.289376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:00.517 [2024-12-09 05:36:42.895561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:00.517 [2024-12-09 05:36:42.895700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 01:42:00.517 [2024-12-09 05:36:42.895727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 605.939 ms 01:42:00.517 [2024-12-09 05:36:42.895742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:00.517 [2024-12-09 05:36:42.902816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:00.517 [2024-12-09 05:36:42.902932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 01:42:00.517 [2024-12-09 05:36:42.902954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.837 ms 01:42:00.517 [2024-12-09 05:36:42.902988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:00.517 [2024-12-09 05:36:42.903672] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 01:42:00.517 [2024-12-09 05:36:42.903717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:00.517 [2024-12-09 05:36:42.903731] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 01:42:00.517 [2024-12-09 05:36:42.903745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 01:42:00.517 [2024-12-09 05:36:42.903756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:00.517 [2024-12-09 05:36:42.903806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:00.517 [2024-12-09 05:36:42.903825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 01:42:00.517 [2024-12-09 05:36:42.903837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:42:00.517 [2024-12-09 05:36:42.903859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:00.517 [2024-12-09 05:36:42.903912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 615.876 ms, result 0 01:42:00.517 [2024-12-09 05:36:42.903975] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 01:42:00.517 [2024-12-09 05:36:42.904210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:00.517 [2024-12-09 05:36:42.904226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 01:42:00.517 [2024-12-09 05:36:42.904237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.238 ms 01:42:00.517 [2024-12-09 05:36:42.904247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.085 [2024-12-09 05:36:43.529174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.085 [2024-12-09 05:36:43.529571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 01:42:01.086 [2024-12-09 05:36:43.529626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 623.901 ms 01:42:01.086 [2024-12-09 05:36:43.529639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.086 [2024-12-09 05:36:43.535898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.086 [2024-12-09 05:36:43.535943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 01:42:01.086 [2024-12-09 05:36:43.535958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.519 ms 01:42:01.086 [2024-12-09 05:36:43.535969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.086 [2024-12-09 05:36:43.536504] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 01:42:01.086 [2024-12-09 05:36:43.536531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.086 [2024-12-09 05:36:43.536543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 01:42:01.086 [2024-12-09 05:36:43.536555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 01:42:01.086 [2024-12-09 05:36:43.536566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.086 [2024-12-09 05:36:43.536601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.086 [2024-12-09 05:36:43.536614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 01:42:01.086 [2024-12-09 05:36:43.536625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:42:01.086 [2024-12-09 05:36:43.536636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.086 [2024-12-09 
05:36:43.536680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 633.731 ms, result 0 01:42:01.086 [2024-12-09 05:36:43.536736] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 01:42:01.086 [2024-12-09 05:36:43.536752] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 01:42:01.086 [2024-12-09 05:36:43.536766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.086 [2024-12-09 05:36:43.536778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 01:42:01.086 [2024-12-09 05:36:43.536790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1249.786 ms 01:42:01.086 [2024-12-09 05:36:43.536801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.086 [2024-12-09 05:36:43.536836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.086 [2024-12-09 05:36:43.536855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 01:42:01.086 [2024-12-09 05:36:43.536866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:42:01.086 [2024-12-09 05:36:43.536877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.549606] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:42:01.345 [2024-12-09 05:36:43.549901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.549949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:42:01.345 [2024-12-09 05:36:43.550037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.026 ms 01:42:01.345 [2024-12-09 05:36:43.550073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.550804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.550940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 01:42:01.345 [2024-12-09 05:36:43.551032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 01:42:01.345 [2024-12-09 05:36:43.551071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.553156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.553271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 01:42:01.345 [2024-12-09 05:36:43.553411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.039 ms 01:42:01.345 [2024-12-09 05:36:43.553450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.553541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.553578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 01:42:01.345 [2024-12-09 05:36:43.553675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:42:01.345 [2024-12-09 05:36:43.553712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.553857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.553949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:42:01.345 
[2024-12-09 05:36:43.553986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 01:42:01.345 [2024-12-09 05:36:43.554015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.554104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.554142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:42:01.345 [2024-12-09 05:36:43.554173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:42:01.345 [2024-12-09 05:36:43.554251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.554330] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 01:42:01.345 [2024-12-09 05:36:43.554365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.554396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 01:42:01.345 [2024-12-09 05:36:43.554486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 01:42:01.345 [2024-12-09 05:36:43.554522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.345 [2024-12-09 05:36:43.554634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:01.345 [2024-12-09 05:36:43.554721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 01:42:01.346 [2024-12-09 05:36:43.554757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 01:42:01.346 [2024-12-09 05:36:43.554788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:01.346 [2024-12-09 05:36:43.556119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1636.128 ms, result 0 01:42:01.346 [2024-12-09 05:36:43.570615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:42:01.346 [2024-12-09 05:36:43.586592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 01:42:01.346 [2024-12-09 05:36:43.597387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:42:01.346 Validate MD5 checksum, iteration 1 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:42:01.346 05:36:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:42:01.346 05:36:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:42:01.346 [2024-12-09 05:36:43.748936] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:42:01.346 [2024-12-09 05:36:43.749243] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84684 ] 01:42:01.605 [2024-12-09 05:36:43.922632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:42:01.865 [2024-12-09 05:36:44.064361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:42:03.778  [2024-12-09T05:36:46.492Z] Copying: 682/1024 [MB] (682 MBps) [2024-12-09T05:36:48.397Z] Copying: 1024/1024 [MB] (average 665 MBps) 01:42:05.941 01:42:05.941 05:36:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 01:42:05.941 05:36:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:42:07.847 Validate MD5 checksum, iteration 2 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8e679b8cf171aa48e8fa818622fe72a9 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8e679b8cf171aa48e8fa818622fe72a9 != \8\e\6\7\9\b\8\c\f\1\7\1\a\a\4\8\e\8\f\a\8\1\8\6\2\2\f\e\7\2\a\9 ]] 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:42:07.847 05:36:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:42:07.847 [2024-12-09 05:36:50.043285] Starting SPDK v25.01-pre git sha1 
cabd61f7f / DPDK 24.03.0 initialization... 01:42:07.847 [2024-12-09 05:36:50.043410] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84751 ] 01:42:07.847 [2024-12-09 05:36:50.226584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:42:08.107 [2024-12-09 05:36:50.377963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:42:10.011  [2024-12-09T05:36:53.096Z] Copying: 592/1024 [MB] (592 MBps) [2024-12-09T05:36:54.501Z] Copying: 1024/1024 [MB] (average 596 MBps) 01:42:12.045 01:42:12.045 05:36:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 01:42:12.045 05:36:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6213c8edac3dab4953e6660622bd0f81 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6213c8edac3dab4953e6660622bd0f81 != \6\2\1\3\c\8\e\d\a\c\3\d\a\b\4\9\5\3\e\6\6\6\0\6\2\2\b\d\0\f\8\1 ]] 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84648 ]] 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84648 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84648 ']' 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84648 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84648 01:42:13.947 killing process with pid 84648 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84648' 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84648 01:42:13.947 05:36:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84648 01:42:15.326 [2024-12-09 05:36:57.534830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 01:42:15.326 [2024-12-09 05:36:57.554007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.326 [2024-12-09 05:36:57.554054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 01:42:15.326 [2024-12-09 05:36:57.554072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:42:15.326 [2024-12-09 05:36:57.554100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.326 [2024-12-09 05:36:57.554125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 01:42:15.326 [2024-12-09 05:36:57.558864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.326 [2024-12-09 05:36:57.558893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 01:42:15.326 [2024-12-09 05:36:57.558913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.730 ms 01:42:15.326 [2024-12-09 05:36:57.558923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.326 [2024-12-09 05:36:57.559205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.326 [2024-12-09 05:36:57.559224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 01:42:15.326 [2024-12-09 05:36:57.559237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 01:42:15.326 [2024-12-09 05:36:57.559248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.326 [2024-12-09 05:36:57.560449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.326 [2024-12-09 05:36:57.560499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 01:42:15.326 [2024-12-09 05:36:57.560513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.184 ms 01:42:15.326 [2024-12-09 05:36:57.560531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.326 [2024-12-09 05:36:57.561485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.326 [2024-12-09 05:36:57.561515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 01:42:15.326 [2024-12-09 05:36:57.561528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 01:42:15.326 [2024-12-09 05:36:57.561539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.326 [2024-12-09 05:36:57.576131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.576304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 01:42:15.327 [2024-12-09 05:36:57.576349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.573 ms 01:42:15.327 [2024-12-09 05:36:57.576360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.584398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.584434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 01:42:15.327 [2024-12-09 05:36:57.584448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.970 ms 01:42:15.327 [2024-12-09 05:36:57.584458] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.584556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.584569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 01:42:15.327 [2024-12-09 05:36:57.584582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 01:42:15.327 [2024-12-09 05:36:57.584597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.598839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.599006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 01:42:15.327 [2024-12-09 05:36:57.599036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.246 ms 01:42:15.327 [2024-12-09 05:36:57.599046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.613670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.613855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 01:42:15.327 [2024-12-09 05:36:57.613875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.572 ms 01:42:15.327 [2024-12-09 05:36:57.613885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.628719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.628877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 01:42:15.327 [2024-12-09 05:36:57.628897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.719 ms 01:42:15.327 [2024-12-09 05:36:57.628908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.643123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.643157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 01:42:15.327 [2024-12-09 05:36:57.643172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.143 ms 01:42:15.327 [2024-12-09 05:36:57.643181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.643217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 01:42:15.327 [2024-12-09 05:36:57.643236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:42:15.327 [2024-12-09 05:36:57.643250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 01:42:15.327 [2024-12-09 05:36:57.643262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 01:42:15.327 [2024-12-09 05:36:57.643274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 
[2024-12-09 05:36:57.643330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:42:15.327 [2024-12-09 05:36:57.643442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 01:42:15.327 [2024-12-09 05:36:57.643453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 162f066f-41b9-45f4-beb4-c6fa7d398deb 01:42:15.327 [2024-12-09 05:36:57.643478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 01:42:15.327 [2024-12-09 05:36:57.643489] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 01:42:15.327 [2024-12-09 05:36:57.643499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 01:42:15.327 [2024-12-09 05:36:57.643510] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 01:42:15.327 [2024-12-09 05:36:57.643520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 01:42:15.327 [2024-12-09 05:36:57.643532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 01:42:15.327 [2024-12-09 05:36:57.643549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 01:42:15.327 [2024-12-09 05:36:57.643559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 01:42:15.327 [2024-12-09 05:36:57.643569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 01:42:15.327 [2024-12-09 05:36:57.643580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.643590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 01:42:15.327 [2024-12-09 05:36:57.643602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 01:42:15.327 [2024-12-09 05:36:57.643613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:42:15.327 [2024-12-09 05:36:57.664438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:42:15.327 [2024-12-09 05:36:57.664488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 01:42:15.327 [2024-12-09 05:36:57.664503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.826 ms 01:42:15.327 [2024-12-09 05:36:57.664513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
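A gloss on the dump above: the per-band validity lines sum exactly to the reported total (261120 + 261120 + 2048 = 524288 valid LBAs), and WAF is printed as inf because, on the conventional definition of media writes over user writes, this restarted instance has served 0 user writes. A rough recomputation under that assumption (hypothetical shell, not part of the test):

    # Valid LBAs: bands 1-3 from the 'Bands validity' dump.
    echo $((261120 + 261120 + 2048))        # 524288, matching ftl_debug.c
    # WAF: total writes / user writes; 320 / 0 is printed as 'inf'.
    total_writes=320 user_writes=0
    ((user_writes == 0)) && echo 'WAF: inf' \
        || echo "scale=2; $total_writes / $user_writes" | bc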
01:42:15.327 [2024-12-09 05:36:57.665125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:42:15.327 [2024-12-09 05:36:57.665142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
01:42:15.327 [2024-12-09 05:36:57.665153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms
01:42:15.327 [2024-12-09 05:36:57.665164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
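The Rollback entries that follow are the shutdown-side counterparts of the startup steps, unwound in reverse registration order; a step with nothing left to undo reports a duration of 0.000 ms. To list the unwind order at a glance (same hypothetical build.log as above):

  # Print each rollback step name in the order it ran.
  grep -A1 'trace_step: .*Rollback' build.log | grep -o 'name: .*'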
01:42:15.327 [2024-12-09 05:36:57.732933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.327 [2024-12-09 05:36:57.732972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
01:42:15.327 [2024-12-09 05:36:57.732986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.327 [2024-12-09 05:36:57.733019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.327 [2024-12-09 05:36:57.733056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.327 [2024-12-09 05:36:57.733067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
01:42:15.327 [2024-12-09 05:36:57.733078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.327 [2024-12-09 05:36:57.733089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.327 [2024-12-09 05:36:57.733174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.327 [2024-12-09 05:36:57.733189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
01:42:15.327 [2024-12-09 05:36:57.733202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.328 [2024-12-09 05:36:57.733212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.328 [2024-12-09 05:36:57.733238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.328 [2024-12-09 05:36:57.733249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
01:42:15.328 [2024-12-09 05:36:57.733261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.328 [2024-12-09 05:36:57.733272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.861558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.861788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
01:42:15.594 [2024-12-09 05:36:57.861885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.861922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.962638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.962840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
01:42:15.594 [2024-12-09 05:36:57.962944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.962962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.963145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
01:42:15.594 [2024-12-09 05:36:57.963158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.963170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.963259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
01:42:15.594 [2024-12-09 05:36:57.963271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.963282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.963436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
01:42:15.594 [2024-12-09 05:36:57.963448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.963459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.963557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
01:42:15.594 [2024-12-09 05:36:57.963574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.963586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.963648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
01:42:15.594 [2024-12-09 05:36:57.963659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.963670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
01:42:15.594 [2024-12-09 05:36:57.963746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
01:42:15.594 [2024-12-09 05:36:57.963758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
01:42:15.594 [2024-12-09 05:36:57.963769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:42:15.594 [2024-12-09 05:36:57.963917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 410.530 ms, result 0
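With the FTL management process finished, the harness unwinds through ftl/common.sh: it unsets spdk_tgt_pid, deletes the generated tgt.json and ini.json configs, and calls remove_shm, whose xtrace appears below. Reconstructed from that trace, remove_shm boils down to something like the following sketch. Only the two concrete paths are taken from the log; the bare "rm -f rm -f" entries suggest file-list variables that expanded to nothing in this run:

  remove_shm() {
      echo Remove shared memory files
      # common.sh@205-206 and @209 removed names that expanded empty here
      rm -f /dev/shm/spdk_tgt_trace.pid84407   # per-target xtrace shm file
      rm -f /dev/shm/iscsi
  }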
01:42:16.976 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
01:42:16.977 Remove shared memory files 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84407
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
01:42:16.977 ************************************
01:42:16.977 END TEST ftl_upgrade_shutdown
01:42:16.977 ************************************
01:42:16.977
01:42:16.977 real 1m32.088s
01:42:16.977 user 2m5.048s
01:42:16.977 sys 0m25.037s
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
01:42:16.977 05:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
01:42:17.236 Process with pid 76861 is not found 05:36:59 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
01:42:17.236 05:36:59 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
01:42:17.236 05:36:59 ftl -- ftl/ftl.sh@14 -- # killprocess 76861
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 76861 ']'
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@958 -- # kill -0 76861
01:42:17.236 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76861) - No such process
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76861 is not found'
01:42:17.236 05:36:59 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
01:42:17.236 05:36:59 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84882
01:42:17.236 05:36:59 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
01:42:17.236 05:36:59 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84882
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@835 -- # '[' -z 84882 ']'
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:42:17.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
01:42:17.236 05:36:59 ftl -- common/autotest_common.sh@10 -- # set +x
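waitforlisten (autotest_common.sh) blocks until the freshly launched spdk_tgt answers on its RPC socket; the trace above shows its inputs: the pid, rpc_addr=/var/tmp/spdk.sock, and max_retries=100. A simplified sketch of that polling pattern, run from the SPDK repo root, not the verbatim helper:

  # Poll until the target answers a trivial RPC, or give up.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced
          kill -0 "$pid" 2>/dev/null || return 1   # target died early
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }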
01:42:17.236 [2024-12-09 05:36:59.586031] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
01:42:17.236 [2024-12-09 05:36:59.586321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84882 ]
01:42:17.496 [2024-12-09 05:36:59.773923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:42:17.496 [2024-12-09 05:36:59.913115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:42:18.877 05:37:00 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:42:18.877 05:37:00 ftl -- common/autotest_common.sh@868 -- # return 0
01:42:18.877 05:37:00 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
01:42:18.877 nvme0n1
01:42:19.135 05:37:01 ftl -- ftl/ftl.sh@22 -- # clear_lvols
01:42:19.135 05:37:01 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
01:42:19.135 05:37:01 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
01:42:19.135 05:37:01 ftl -- ftl/common.sh@28 -- # stores=6a0f8776-91fe-4d94-abcd-7c1d440af63c
01:42:19.135 05:37:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores
01:42:19.135 05:37:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a0f8776-91fe-4d94-abcd-7c1d440af63c
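clear_lvols is visible almost in full in the trace above: ftl/common.sh lists every lvstore UUID over RPC, then deletes each one so lvol stores left behind by earlier subtests cannot leak into the next run. Reassembled from those common.sh@28-30 lines (run from the SPDK repo root):

  clear_lvols() {
      stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
      for lvs in $stores; do
          scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
      done
  }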
01:42:19.394 05:37:01 ftl -- ftl/ftl.sh@23 -- # killprocess 84882
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@954 -- # '[' -z 84882 ']'
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@958 -- # kill -0 84882
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@959 -- # uname
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84882
01:42:19.394 killing process with pid 84882
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84882'
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@973 -- # kill 84882
01:42:19.394 05:37:01 ftl -- common/autotest_common.sh@978 -- # wait 84882
01:42:21.940 05:37:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
01:42:22.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:42:22.509 Waiting for block devices as requested
01:42:22.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
01:42:22.768 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
01:42:22.768 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
01:42:23.026 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
01:42:28.304 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
01:42:28.304 05:37:10 ftl -- ftl/ftl.sh@28 -- # remove_shm
01:42:28.304 Remove shared memory files 05:37:10 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
01:42:28.304 05:37:10 ftl -- ftl/common.sh@205 -- # rm -f rm -f
01:42:28.304 05:37:10 ftl -- ftl/common.sh@206 -- # rm -f rm -f
01:42:28.304 05:37:10 ftl -- ftl/common.sh@207 -- # rm -f rm -f
01:42:28.304 05:37:10 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:42:28.304 05:37:10 ftl -- ftl/common.sh@209 -- # rm -f rm -f
01:42:28.304 ************************************
01:42:28.304 END TEST ftl
01:42:28.304 ************************************
01:42:28.304
01:42:28.304 real 11m56.230s
01:42:28.304 user 14m23.181s
01:42:28.304 sys 1m39.036s
01:42:28.304 05:37:10 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
01:42:28.304 05:37:10 ftl -- common/autotest_common.sh@10 -- # set +x
01:42:28.304 05:37:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
01:42:28.304 05:37:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
01:42:28.304 05:37:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
01:42:28.305 05:37:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
01:42:28.305 05:37:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
01:42:28.305 05:37:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
01:42:28.305 05:37:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
01:42:28.305 05:37:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
01:42:28.305 05:37:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
01:42:28.305 05:37:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
01:42:28.305 05:37:10 -- common/autotest_common.sh@726 -- # xtrace_disable
01:42:28.305 05:37:10 -- common/autotest_common.sh@10 -- # set +x
01:42:28.305 05:37:10 -- spdk/autotest.sh@388 -- # autotest_cleanup
01:42:28.305 05:37:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0
01:42:28.305 05:37:10 -- common/autotest_common.sh@1397 -- # xtrace_disable
01:42:28.305 05:37:10 -- common/autotest_common.sh@10 -- # set +x
01:42:30.209 INFO: APP EXITING
01:42:30.209 INFO: killing all VMs
01:42:30.209 INFO: killing vhost app
01:42:30.209 INFO: EXIT DONE
01:42:30.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:42:31.346 0000:00:11.0 (1b36 0010): Already using the nvme driver
01:42:31.346 0000:00:10.0 (1b36 0010): Already using the nvme driver
01:42:31.346 0000:00:12.0 (1b36 0010): Already using the nvme driver
01:42:31.346 0000:00:13.0 (1b36 0010): Already using the nvme driver
01:42:31.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
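The Cleaning block that follows removes per-process DPDK runtime state: the hugepage memseg fbarrays, memzone and hugepage bookkeeping under /var/run/dpdk/spdk0, plus one /var/run/dpdk/spdk_pid* entry for every SPDK target launched during the run (each pid in the list matches a --file-prefix=spdk_pidNNNNN seen earlier in the log). Expressed as plain shell, with paths taken from the log itself; the harness may do this differently internally:

  rm -f /var/run/dpdk/spdk0/config
  rm -f /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-*   # hugepage memseg arrays
  rm -f /var/run/dpdk/spdk0/fbarray_memzone /var/run/dpdk/spdk0/hugepage_info
  rm -rf /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*   # one entry per target pid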
01:42:32.483 Cleaning
01:42:32.483 Removing: /var/run/dpdk/spdk0/config
01:42:32.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
01:42:32.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
01:42:32.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
01:42:32.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
01:42:32.483 Removing: /var/run/dpdk/spdk0/fbarray_memzone
01:42:32.483 Removing: /var/run/dpdk/spdk0/hugepage_info
01:42:32.483 Removing: /var/run/dpdk/spdk0
01:42:32.483 Removing: /var/run/dpdk/spdk_pid57494
01:42:32.483 Removing: /var/run/dpdk/spdk_pid57729
01:42:32.483 Removing: /var/run/dpdk/spdk_pid57958
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58073
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58118
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58257
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58275
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58485
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58602
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58709
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58831
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58945
01:42:32.483 Removing: /var/run/dpdk/spdk_pid58984
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59021
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59097
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59219
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59667
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59742
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59818
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59839
01:42:32.483 Removing: /var/run/dpdk/spdk_pid59996
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60012
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60171
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60187
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60257
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60280
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60346
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60370
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60565
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60607
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60696
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60890
01:42:32.483 Removing: /var/run/dpdk/spdk_pid60996
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61038
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61500
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61598
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61718
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61771
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61802
01:42:32.483 Removing: /var/run/dpdk/spdk_pid61886
01:42:32.483 Removing: /var/run/dpdk/spdk_pid62531
01:42:32.483 Removing: /var/run/dpdk/spdk_pid62583
01:42:32.483 Removing: /var/run/dpdk/spdk_pid63072
01:42:32.483 Removing: /var/run/dpdk/spdk_pid63170
01:42:32.483 Removing: /var/run/dpdk/spdk_pid63290
01:42:32.483 Removing: /var/run/dpdk/spdk_pid63349
01:42:32.483 Removing: /var/run/dpdk/spdk_pid63380
01:42:32.483 Removing: /var/run/dpdk/spdk_pid63411
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65325
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65478
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65483
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65495
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65542
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65546
01:42:32.483 Removing: /var/run/dpdk/spdk_pid65568
01:42:32.742 Removing: /var/run/dpdk/spdk_pid65609
01:42:32.742 Removing: /var/run/dpdk/spdk_pid65613
01:42:32.742 Removing: /var/run/dpdk/spdk_pid65635
01:42:32.742 Removing: /var/run/dpdk/spdk_pid65675
01:42:32.742 Removing: /var/run/dpdk/spdk_pid65679
01:42:32.742 Removing: /var/run/dpdk/spdk_pid65701
01:42:32.742 Removing: /var/run/dpdk/spdk_pid67113
01:42:32.742 Removing: /var/run/dpdk/spdk_pid67232
01:42:32.742 Removing: /var/run/dpdk/spdk_pid68666
01:42:32.742 Removing: /var/run/dpdk/spdk_pid70435
01:42:32.742 Removing: /var/run/dpdk/spdk_pid70520
01:42:32.742 Removing: /var/run/dpdk/spdk_pid70609
01:42:32.742 Removing: /var/run/dpdk/spdk_pid70722
01:42:32.742 Removing: /var/run/dpdk/spdk_pid70820
01:42:32.742 Removing: /var/run/dpdk/spdk_pid70921
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71012
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71094
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71204
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71304
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71406
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71498
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71580
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71694
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71787
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71894
01:42:32.742 Removing: /var/run/dpdk/spdk_pid71980
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72065
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72176
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72273
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72379
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72460
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72545
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72627
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72709
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72818
01:42:32.742 Removing: /var/run/dpdk/spdk_pid72914
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73023
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73104
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73189
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73269
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73343
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73452
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73554
01:42:32.742 Removing: /var/run/dpdk/spdk_pid73703
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74004
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74046
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74509
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74700
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74800
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74922
01:42:32.742 Removing: /var/run/dpdk/spdk_pid74980
01:42:32.742 Removing: /var/run/dpdk/spdk_pid75007
01:42:32.742 Removing: /var/run/dpdk/spdk_pid75302
01:42:32.742 Removing: /var/run/dpdk/spdk_pid75380
01:42:33.001 Removing: /var/run/dpdk/spdk_pid75470
01:42:33.001 Removing: /var/run/dpdk/spdk_pid75903
01:42:33.001 Removing: /var/run/dpdk/spdk_pid76051
01:42:33.001 Removing: /var/run/dpdk/spdk_pid76861
01:42:33.001 Removing: /var/run/dpdk/spdk_pid77011
01:42:33.001 Removing: /var/run/dpdk/spdk_pid77225
01:42:33.001 Removing: /var/run/dpdk/spdk_pid77339
01:42:33.001 Removing: /var/run/dpdk/spdk_pid77665
01:42:33.001 Removing: /var/run/dpdk/spdk_pid77923
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78287
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78498
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78656
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78725
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78863
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78905
01:42:33.001 Removing: /var/run/dpdk/spdk_pid78977
01:42:33.001 Removing: /var/run/dpdk/spdk_pid79186
01:42:33.001 Removing: /var/run/dpdk/spdk_pid79429
01:42:33.001 Removing: /var/run/dpdk/spdk_pid79908
01:42:33.001 Removing: /var/run/dpdk/spdk_pid80380
01:42:33.001 Removing: /var/run/dpdk/spdk_pid80876
01:42:33.001 Removing: /var/run/dpdk/spdk_pid81468
01:42:33.001 Removing: /var/run/dpdk/spdk_pid81638
01:42:33.001 Removing: /var/run/dpdk/spdk_pid81725
01:42:33.001 Removing: /var/run/dpdk/spdk_pid82388
01:42:33.001 Removing: /var/run/dpdk/spdk_pid82458
01:42:33.001 Removing: /var/run/dpdk/spdk_pid82922
01:42:33.001 Removing: /var/run/dpdk/spdk_pid83292
01:42:33.001 Removing: /var/run/dpdk/spdk_pid83811
01:42:33.001 Removing: /var/run/dpdk/spdk_pid83945
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84003
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84067
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84124
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84195
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84407
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84500
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84570
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84648
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84684
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84751
01:42:33.001 Removing: /var/run/dpdk/spdk_pid84882
01:42:33.001 Clean
01:42:33.259 05:37:15 -- common/autotest_common.sh@1453 -- # return 0
01:42:33.259 05:37:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:42:33.259 05:37:15 -- common/autotest_common.sh@732 -- # xtrace_disable
01:42:33.259 05:37:15 -- common/autotest_common.sh@10 -- # set +x
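The coverage stage below captures lcov data for the whole run and then strips out code that should not count against SPDK: DPDK sources, system headers under /usr, and a few example apps. Every invocation repeats the same long --rc flag block; condensed to its shape, with the full flags and absolute output paths omitted:

  # Condensed form of the lcov steps below: merge, then filter patterns out.
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
             '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r cov_total.info "$pat" -o cov_total.info
  done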
01:42:33.259 05:37:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:42:33.259 05:37:15 -- common/autotest_common.sh@732 -- # xtrace_disable
01:42:33.259 05:37:15 -- common/autotest_common.sh@10 -- # set +x
01:42:33.259 05:37:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:42:33.259 05:37:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:42:33.259 05:37:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:42:33.259 05:37:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:42:33.259 05:37:15 -- spdk/autotest.sh@398 -- # hostname
01:42:33.259 05:37:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:42:33.517 geninfo: WARNING: invalid characters removed from testname!
01:43:00.096 05:37:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:43:01.997 05:37:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:43:04.531 05:37:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:43:06.437 05:37:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:43:08.970 05:37:50 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:43:10.906 05:37:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
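Finally, timing_finish (invoked from spdk/autorun.sh) renders the accumulated timing.txt as a flame graph when FlameGraph is installed; the exact call appears in the trace below. Its shape is as follows, with the SVG redirect being an assumption since the trace does not show where stdout goes:

  [[ -x /usr/local/FlameGraph/flamegraph.pl ]] &&
      /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
          --nametype Step: --countname seconds timing.txt > timing.svg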
01:43:13.437 05:37:55 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:43:13.437 05:37:55 -- spdk/autorun.sh@1 -- $ timing_finish
01:43:13.438 05:37:55 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:43:13.438 05:37:55 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:43:13.438 05:37:55 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:43:13.438 05:37:55 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:43:13.446 + [[ -n 5261 ]]
01:43:13.446 + sudo kill 5261
01:43:13.455 [Pipeline] }
01:43:13.461 [Pipeline] // timeout
01:43:13.465 [Pipeline] }
01:43:13.479 [Pipeline] // stage
01:43:13.484 [Pipeline] }
01:43:13.496 [Pipeline] // catchError
01:43:13.505 [Pipeline] stage
01:43:13.507 [Pipeline] { (Stop VM)
01:43:13.520 [Pipeline] sh
01:43:13.849 + vagrant halt
01:43:16.407 ==> default: Halting domain...
01:43:23.135 [Pipeline] sh
01:43:23.423 + vagrant destroy -f
01:43:25.954 ==> default: Removing domain...
01:43:26.535 [Pipeline] sh
01:43:26.818 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
01:43:26.828 [Pipeline] }
01:43:26.844 [Pipeline] // stage
01:43:26.850 [Pipeline] }
01:43:26.865 [Pipeline] // dir
01:43:26.870 [Pipeline] }
01:43:26.885 [Pipeline] // wrap
01:43:26.892 [Pipeline] }
01:43:26.904 [Pipeline] // catchError
01:43:26.915 [Pipeline] stage
01:43:26.917 [Pipeline] { (Epilogue)
01:43:26.931 [Pipeline] sh
01:43:27.213 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:43:32.498 [Pipeline] catchError
01:43:32.500 [Pipeline] {
01:43:32.512 [Pipeline] sh
01:43:32.795 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:43:32.795 Artifacts sizes are good
01:43:32.802 [Pipeline] }
01:43:32.811 [Pipeline] // catchError
01:43:32.819 [Pipeline] archiveArtifacts
01:43:32.824 Archiving artifacts
01:43:32.920 [Pipeline] cleanWs
01:43:32.931 [WS-CLEANUP] Deleting project workspace...
01:43:32.931 [WS-CLEANUP] Deferred wipeout is used...
01:43:32.937 [WS-CLEANUP] done
01:43:32.938 [Pipeline] }
01:43:32.952 [Pipeline] // stage
01:43:32.956 [Pipeline] }
01:43:32.968 [Pipeline] // node
01:43:32.973 [Pipeline] End of Pipeline
01:43:33.003 Finished: SUCCESS