00:00:00.001 Started by upstream project "autotest-per-patch" build number 132773
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.098 Using shallow fetch with depth 1
00:00:00.098 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.099 > git --version # timeout=10
00:00:00.143 > git --version # 'git version 2.39.2'
00:00:00.143 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.171 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.171 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.997 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.010 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.022 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.022 > git config core.sparsecheckout # timeout=10
00:00:05.036 > git read-tree -mu HEAD # timeout=10
00:00:05.054 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.080 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.080 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.173 [Pipeline] Start of Pipeline
00:00:05.186 [Pipeline] library
00:00:05.188 Loading library shm_lib@master
00:00:05.188 Library shm_lib@master is cached. Copying from home.
00:00:05.211 [Pipeline] node
01:01:51.841 Still waiting to schedule task
01:01:51.841 Waiting for next available executor on ‘vagrant-vm-host’
01:20:58.026 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest
01:20:58.028 [Pipeline] {
01:20:58.040 [Pipeline] catchError
01:20:58.041 [Pipeline] {
01:20:58.056 [Pipeline] wrap
01:20:58.065 [Pipeline] {
01:20:58.074 [Pipeline] stage
01:20:58.076 [Pipeline] { (Prologue)
01:20:58.097 [Pipeline] echo
01:20:58.099 Node: VM-host-SM0
01:20:58.106 [Pipeline] cleanWs
01:20:58.116 [WS-CLEANUP] Deleting project workspace...
01:20:58.116 [WS-CLEANUP] Deferred wipeout is used...
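For readers reproducing the checkout above outside Jenkins: the git plugin's sequence amounts to a shallow, tag-forcing fetch of refs/heads/master followed by a detached checkout of the fetched revision. A minimal sketch, with the URL and SHA taken from the log above and the Jenkins-specific credential/proxy setup omitted:

    # Sketch of the checkout the git plugin performs above (credentials/proxy omitted).
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # FETCH_HEAD at the time of this run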
01:20:58.122 [WS-CLEANUP] done
01:20:58.308 [Pipeline] setCustomBuildProperty
01:20:58.401 [Pipeline] httpRequest
01:20:58.810 [Pipeline] echo
01:20:58.812 Sorcerer 10.211.164.101 is alive
01:20:58.822 [Pipeline] retry
01:20:58.825 [Pipeline] {
01:20:58.839 [Pipeline] httpRequest
01:20:58.844 HttpMethod: GET
01:20:58.844 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
01:20:58.845 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
01:20:58.846 Response Code: HTTP/1.1 200 OK
01:20:58.846 Success: Status code 200 is in the accepted range: 200,404
01:20:58.847 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
01:20:58.992 [Pipeline] }
01:20:59.010 [Pipeline] // retry
01:20:59.017 [Pipeline] sh
01:20:59.434 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
01:20:59.451 [Pipeline] httpRequest
01:20:59.854 [Pipeline] echo
01:20:59.856 Sorcerer 10.211.164.101 is alive
01:20:59.866 [Pipeline] retry
01:20:59.869 [Pipeline] {
01:20:59.883 [Pipeline] httpRequest
01:20:59.889 HttpMethod: GET
01:20:59.889 URL: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz
01:20:59.890 Sending request to url: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz
01:20:59.891 Response Code: HTTP/1.1 200 OK
01:20:59.891 Success: Status code 200 is in the accepted range: 200,404
01:20:59.892 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz
01:21:02.159 [Pipeline] }
01:21:02.176 [Pipeline] // retry
01:21:02.183 [Pipeline] sh
01:21:02.460 + tar --no-same-owner -xf spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz
01:21:05.008 [Pipeline] sh
01:21:05.284 + git -C spdk log --oneline -n5
01:21:05.284 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode
01:21:05.284 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps
01:21:05.284 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
01:21:05.285 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
01:21:05.285 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
01:21:05.300 [Pipeline] writeFile
01:21:05.311 [Pipeline] sh
01:21:05.587 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
01:21:05.597 [Pipeline] sh
01:21:05.871 + cat autorun-spdk.conf
01:21:05.871 SPDK_RUN_FUNCTIONAL_TEST=1
01:21:05.871 SPDK_TEST_NVME=1
01:21:05.871 SPDK_TEST_FTL=1
01:21:05.871 SPDK_TEST_ISAL=1
01:21:05.871 SPDK_RUN_ASAN=1
01:21:05.871 SPDK_RUN_UBSAN=1
01:21:05.871 SPDK_TEST_XNVME=1
01:21:05.871 SPDK_TEST_NVME_FDP=1
01:21:05.871 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
01:21:05.877 RUN_NIGHTLY=0
01:21:05.879 [Pipeline] }
01:21:05.892 [Pipeline] // stage
01:21:05.906 [Pipeline] stage
01:21:05.909 [Pipeline] { (Run VM)
01:21:05.921 [Pipeline] sh
01:21:06.199 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
01:21:06.199 + echo 'Start stage prepare_nvme.sh'
01:21:06.199 Start stage prepare_nvme.sh
01:21:06.199 + [[ -n 1 ]]
01:21:06.199 + disk_prefix=ex1
01:21:06.199 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
01:21:06.199 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
01:21:06.199 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
01:21:06.199 ++ SPDK_RUN_FUNCTIONAL_TEST=1
01:21:06.199 ++ SPDK_TEST_NVME=1
01:21:06.199 ++ SPDK_TEST_FTL=1
01:21:06.199 ++ SPDK_TEST_ISAL=1
01:21:06.199 ++ SPDK_RUN_ASAN=1
01:21:06.199 ++ SPDK_RUN_UBSAN=1
01:21:06.199 ++ SPDK_TEST_XNVME=1
01:21:06.199 ++ SPDK_TEST_NVME_FDP=1
01:21:06.199 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
01:21:06.199 ++ RUN_NIGHTLY=0
01:21:06.199 + cd /var/jenkins/workspace/nvme-vg-autotest
01:21:06.199 + nvme_files=()
01:21:06.199 + declare -A nvme_files
01:21:06.199 + backend_dir=/var/lib/libvirt/images/backends
01:21:06.199 + nvme_files['nvme.img']=5G
01:21:06.199 + nvme_files['nvme-cmb.img']=5G
01:21:06.199 + nvme_files['nvme-multi0.img']=4G
01:21:06.199 + nvme_files['nvme-multi1.img']=4G
01:21:06.199 + nvme_files['nvme-multi2.img']=4G
01:21:06.199 + nvme_files['nvme-openstack.img']=8G
01:21:06.199 + nvme_files['nvme-zns.img']=5G
01:21:06.199 + (( SPDK_TEST_NVME_PMR == 1 ))
01:21:06.199 + (( SPDK_TEST_FTL == 1 ))
01:21:06.199 + nvme_files["nvme-ftl.img"]=6G
01:21:06.199 + (( SPDK_TEST_NVME_FDP == 1 ))
01:21:06.199 + nvme_files["nvme-fdp.img"]=1G
01:21:06.199 + [[ ! -d /var/lib/libvirt/images/backends ]]
01:21:06.199 + for nvme in "${!nvme_files[@]}"
01:21:06.199 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
01:21:06.199 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
01:21:06.199 + for nvme in "${!nvme_files[@]}"
01:21:06.199 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
01:21:06.199 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
01:21:06.199 + for nvme in "${!nvme_files[@]}"
01:21:06.199 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
01:21:06.457 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
01:21:06.457 + for nvme in "${!nvme_files[@]}"
01:21:06.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
01:21:06.457 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
01:21:06.457 + for nvme in "${!nvme_files[@]}"
01:21:06.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
01:21:06.457 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
01:21:06.457 + for nvme in "${!nvme_files[@]}"
01:21:06.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
01:21:06.457 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
01:21:06.457 + for nvme in "${!nvme_files[@]}"
01:21:06.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
01:21:06.457 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
01:21:06.457 + for nvme in "${!nvme_files[@]}"
01:21:06.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
01:21:06.457 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
01:21:06.715 + for nvme in "${!nvme_files[@]}"
01:21:06.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
01:21:06.715 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
01:21:06.715 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
01:21:06.715 + echo 'End stage prepare_nvme.sh'
01:21:06.715 End stage prepare_nvme.sh
01:21:06.726 [Pipeline] sh
01:21:07.006 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
01:21:07.006 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
01:21:07.264
01:21:07.265 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
01:21:07.265 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
01:21:07.265 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
01:21:07.265 HELP=0
01:21:07.265 DRY_RUN=0
01:21:07.265 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
01:21:07.265 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
01:21:07.265 NVME_AUTO_CREATE=0
01:21:07.265 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
01:21:07.265 NVME_CMB=,,,,
01:21:07.265 NVME_PMR=,,,,
01:21:07.265 NVME_ZNS=,,,,
01:21:07.265 NVME_MS=true,,,,
01:21:07.265 NVME_FDP=,,,on,
01:21:07.265 SPDK_VAGRANT_DISTRO=fedora39
01:21:07.265 SPDK_VAGRANT_VMCPU=10
01:21:07.265 SPDK_VAGRANT_VMRAM=12288
01:21:07.265 SPDK_VAGRANT_PROVIDER=libvirt
01:21:07.265 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
01:21:07.265 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
01:21:07.265 SPDK_OPENSTACK_NETWORK=0
01:21:07.265 VAGRANT_PACKAGE_BOX=0
01:21:07.265 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
01:21:07.265 FORCE_DISTRO=true
01:21:07.265 VAGRANT_BOX_VERSION=
01:21:07.265 EXTRA_VAGRANTFILES=
01:21:07.265 NIC_MODEL=e1000
01:21:07.265
01:21:07.265 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
01:21:07.265 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
01:21:10.546 Bringing machine 'default' up with 'libvirt' provider...
01:21:10.804 ==> default: Creating image (snapshot of base box volume).
01:21:11.062 ==> default: Creating domain with the following settings...
01:21:11.062 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733721362_c0b942e9aa29ae20ddec
01:21:11.062 ==> default: -- Domain type: kvm
01:21:11.062 ==> default: -- Cpus: 10
01:21:11.062 ==> default: -- Feature: acpi
01:21:11.062 ==> default: -- Feature: apic
01:21:11.062 ==> default: -- Feature: pae
01:21:11.062 ==> default: -- Memory: 12288M
01:21:11.062 ==> default: -- Memory Backing: hugepages:
01:21:11.062 ==> default: -- Management MAC:
01:21:11.062 ==> default: -- Loader:
01:21:11.062 ==> default: -- Nvram:
01:21:11.062 ==> default: -- Base box: spdk/fedora39
01:21:11.062 ==> default: -- Storage pool: default
01:21:11.062 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733721362_c0b942e9aa29ae20ddec.img (20G)
01:21:11.062 ==> default: -- Volume Cache: default
01:21:11.062 ==> default: -- Kernel:
01:21:11.062 ==> default: -- Initrd:
01:21:11.062 ==> default: -- Graphics Type: vnc
01:21:11.062 ==> default: -- Graphics Port: -1
01:21:11.062 ==> default: -- Graphics IP: 127.0.0.1
01:21:11.062 ==> default: -- Graphics Password: Not defined
01:21:11.062 ==> default: -- Video Type: cirrus
01:21:11.062 ==> default: -- Video VRAM: 9216
01:21:11.062 ==> default: -- Sound Type:
01:21:11.062 ==> default: -- Keymap: en-us
01:21:11.062 ==> default: -- TPM Path:
01:21:11.062 ==> default: -- INPUT: type=mouse, bus=ps2
01:21:11.062 ==> default: -- Command line args:
01:21:11.062 ==> default: -> value=-device,
01:21:11.062 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
01:21:11.062 ==> default: -> value=-drive,
01:21:11.062 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
01:21:11.062 ==> default: -> value=-device,
01:21:11.062 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
01:21:11.062 ==> default: -> value=-device,
01:21:11.062 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
01:21:11.062 ==> default: -> value=-drive,
01:21:11.062 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
01:21:11.062 ==> default: -> value=-device,
01:21:11.062 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
01:21:11.062 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
01:21:11.063 ==> default: -> value=-drive,
01:21:11.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
01:21:11.063 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
01:21:11.063 ==> default: -> value=-drive,
01:21:11.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
01:21:11.063 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
01:21:11.063 ==> default: -> value=-drive,
01:21:11.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
01:21:11.063 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
01:21:11.063 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
01:21:11.063 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
01:21:11.063 ==> default: -> value=-drive,
01:21:11.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
01:21:11.063 ==> default: -> value=-device,
01:21:11.063 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
01:21:11.320 ==> default: Creating shared folders metadata...
01:21:11.320 ==> default: Starting domain.
01:21:13.219 ==> default: Waiting for domain to get an IP address...
01:21:35.154 ==> default: Waiting for SSH to become available...
01:21:35.154 ==> default: Configuring and enabling network interfaces...
01:21:37.684 default: SSH address: 192.168.121.212:22
01:21:37.684 default: SSH username: vagrant
01:21:37.684 default: SSH auth method: private key
01:21:39.583 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
01:21:47.727 ==> default: Mounting SSHFS shared folder...
01:21:49.098 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
01:21:49.098 ==> default: Checking Mount..
01:21:50.048 ==> default: Folder Successfully Mounted!
01:21:50.048 ==> default: Running provisioner: file...
01:21:50.984 default: ~/.gitconfig => .gitconfig
01:21:51.551
01:21:51.551 SUCCESS!
01:21:51.551
01:21:51.551 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
01:21:51.551 Use vagrant "suspend" and vagrant "resume" to stop and start.
01:21:51.551 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
01:21:51.551
01:21:51.561 [Pipeline] }
01:21:51.581 [Pipeline] // stage
01:21:51.591 [Pipeline] dir
01:21:51.592 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
01:21:51.594 [Pipeline] {
01:21:51.611 [Pipeline] catchError
01:21:51.613 [Pipeline] {
01:21:51.626 [Pipeline] sh
01:21:51.907 + vagrant ssh-config --host vagrant
01:21:51.907 + sed -ne /^Host/,$p
01:21:51.907 + tee ssh_conf
01:21:55.212 Host vagrant
01:21:55.212 HostName 192.168.121.212
01:21:55.212 User vagrant
01:21:55.212 Port 22
01:21:55.212 UserKnownHostsFile /dev/null
01:21:55.212 StrictHostKeyChecking no
01:21:55.212 PasswordAuthentication no
01:21:55.212 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
01:21:55.212 IdentitiesOnly yes
01:21:55.212 LogLevel FATAL
01:21:55.212 ForwardAgent yes
01:21:55.212 ForwardX11 yes
01:21:55.212
01:21:55.226 [Pipeline] withEnv
01:21:55.228 [Pipeline] {
01:21:55.240 [Pipeline] sh
01:21:55.518 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
01:21:55.519 source /etc/os-release
01:21:55.519 [[ -e /image.version ]] && img=$(< /image.version)
01:21:55.519 # Minimal, systemd-like check.
01:21:55.519 if [[ -e /.dockerenv ]]; then
01:21:55.519 # Clear garbage from the node's name:
01:21:55.519 # agt-er_autotest_547-896 -> autotest_547-896
01:21:55.519 # $HOSTNAME is the actual container id
01:21:55.519 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
01:21:55.519 if grep -q "/etc/hostname" /proc/self/mountinfo; then
01:21:55.519 # We can assume this is a mount from a host where container is running,
01:21:55.519 # so fetch its hostname to easily identify the target swarm worker.
01:21:55.519 container="$(< /etc/hostname) ($agent)"
01:21:55.519 else
01:21:55.519 # Fallback
01:21:55.519 container=$agent
01:21:55.519 fi
01:21:55.519 fi
01:21:55.519 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
01:21:55.519
01:21:55.788 [Pipeline] }
01:21:55.803 [Pipeline] // withEnv
01:21:55.810 [Pipeline] setCustomBuildProperty
01:21:55.824 [Pipeline] stage
01:21:55.826 [Pipeline] { (Tests)
01:21:55.842 [Pipeline] sh
01:21:56.121 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
01:21:56.394 [Pipeline] sh
01:21:56.674 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
01:21:56.943 [Pipeline] timeout
01:21:56.944 Timeout set to expire in 50 min
01:21:56.945 [Pipeline] {
01:21:56.959 [Pipeline] sh
01:21:57.237 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
01:21:57.802 HEAD is now at 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode
01:21:57.812 [Pipeline] sh
01:21:58.091 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
01:21:58.360 [Pipeline] sh
01:21:58.636 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
01:21:59.058 [Pipeline] sh
01:21:59.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
01:21:59.587 ++ readlink -f spdk_repo
01:21:59.587 + DIR_ROOT=/home/vagrant/spdk_repo
01:21:59.587 + [[ -n /home/vagrant/spdk_repo ]]
01:21:59.587 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
01:21:59.587 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
01:21:59.587 + [[ -d /home/vagrant/spdk_repo/spdk ]]
01:21:59.587 + [[ ! -d /home/vagrant/spdk_repo/output ]]
01:21:59.587 + [[ -d /home/vagrant/spdk_repo/output ]]
01:21:59.587 + [[ nvme-vg-autotest == pkgdep-* ]]
01:21:59.587 + cd /home/vagrant/spdk_repo
01:21:59.587 + source /etc/os-release
01:21:59.587 ++ NAME='Fedora Linux'
01:21:59.587 ++ VERSION='39 (Cloud Edition)'
01:21:59.587 ++ ID=fedora
01:21:59.587 ++ VERSION_ID=39
01:21:59.587 ++ VERSION_CODENAME=
01:21:59.587 ++ PLATFORM_ID=platform:f39
01:21:59.587 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
01:21:59.587 ++ ANSI_COLOR='0;38;2;60;110;180'
01:21:59.587 ++ LOGO=fedora-logo-icon
01:21:59.587 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
01:21:59.587 ++ HOME_URL=https://fedoraproject.org/
01:21:59.587 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
01:21:59.587 ++ SUPPORT_URL=https://ask.fedoraproject.org/
01:21:59.587 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
01:21:59.587 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
01:21:59.587 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
01:21:59.587 ++ REDHAT_SUPPORT_PRODUCT=Fedora
01:21:59.587 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
01:21:59.587 ++ SUPPORT_END=2024-11-12
01:21:59.587 ++ VARIANT='Cloud Edition'
01:21:59.587 ++ VARIANT_ID=cloud
01:21:59.587 + uname -a
01:21:59.587 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
01:21:59.587 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
01:21:59.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:22:00.102 Hugepages
01:22:00.102 node hugesize free / total
01:22:00.102 node0 1048576kB 0 / 0
01:22:00.102 node0 2048kB 0 / 0
01:22:00.102
01:22:00.103 Type BDF Vendor Device NUMA Driver Device Block devices
01:22:00.103 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
01:22:00.361 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
01:22:00.361 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
01:22:00.361 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
01:22:00.361 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
01:22:00.361 + rm -f /tmp/spdk-ld-path
01:22:00.361 + source autorun-spdk.conf
01:22:00.361 ++ SPDK_RUN_FUNCTIONAL_TEST=1
01:22:00.361 ++ SPDK_TEST_NVME=1
01:22:00.361 ++ SPDK_TEST_FTL=1
01:22:00.361 ++ SPDK_TEST_ISAL=1
01:22:00.361 ++ SPDK_RUN_ASAN=1
01:22:00.361 ++ SPDK_RUN_UBSAN=1
01:22:00.361 ++ SPDK_TEST_XNVME=1
01:22:00.361 ++ SPDK_TEST_NVME_FDP=1
01:22:00.361 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
01:22:00.361 ++ RUN_NIGHTLY=0
01:22:00.361 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
01:22:00.361 + [[ -n '' ]]
01:22:00.361 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
01:22:00.361 + for M in /var/spdk/build-*-manifest.txt
01:22:00.361 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
01:22:00.361 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
01:22:00.361 + for M in /var/spdk/build-*-manifest.txt
01:22:00.361 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
01:22:00.361 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
01:22:00.361 + for M in /var/spdk/build-*-manifest.txt
01:22:00.361 + [[ -f /var/spdk/build-repo-manifest.txt ]]
01:22:00.361 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
01:22:00.361 ++ uname
01:22:00.361 + [[ Linux == \L\i\n\u\x ]]
01:22:00.361 + sudo dmesg -T
01:22:00.361 + sudo dmesg --clear
01:22:00.361 + dmesg_pid=5297
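The four 1b36:0010 controllers in the `setup.sh status` table above are QEMU's emulated NVMe devices, and they map one-to-one onto the `-device nvme` arguments shown when the domain was created. As a reference, here is a minimal standalone sketch of just the FDP-enabled controller (nvme-3), assembled only from options visible in this log; the bare `qemu-system-x86_64` framing (default machine type, no boot disk) is an assumption for illustration:

    # Sketch: recreate only the FDP controller from the domain's command line above.
    # nvme-subsys enables Flexible Data Placement with 2 reclaim groups (fdp.nrg),
    # 8 reclaim unit handles (fdp.nruh) and 96M reclaim unit size (fdp.runs).
    qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The namespace inherits the FDP configuration from its subsystem, which is why the guest sees it as nvme2n1 with FDP support in the SPDK_TEST_NVME_FDP tests.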
01:22:00.361 + [[ Fedora Linux == FreeBSD ]]
01:22:00.361 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
01:22:00.361 + sudo dmesg -Tw
01:22:00.361 + UNBIND_ENTIRE_IOMMU_GROUP=yes
01:22:00.361 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
01:22:00.361 + [[ -x /usr/src/fio-static/fio ]]
01:22:00.361 + export FIO_BIN=/usr/src/fio-static/fio
01:22:00.361 + FIO_BIN=/usr/src/fio-static/fio
01:22:00.361 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
01:22:00.361 + [[ ! -v VFIO_QEMU_BIN ]]
01:22:00.361 + [[ -e /usr/local/qemu/vfio-user-latest ]]
01:22:00.361 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
01:22:00.361 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
01:22:00.361 + [[ -e /usr/local/qemu/vanilla-latest ]]
01:22:00.361 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
01:22:00.361 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
01:22:00.361 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
01:22:00.619 05:16:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
01:22:00.619 05:16:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
01:22:00.619 05:16:51 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
01:22:00.619 05:16:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
01:22:00.619 05:16:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
01:22:00.619 05:16:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
01:22:00.619 05:16:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
01:22:00.619 05:16:52 -- scripts/common.sh@15 -- $ shopt -s extglob
01:22:00.619 05:16:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
01:22:00.619 05:16:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:22:00.619 05:16:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
01:22:00.619 05:16:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:00.619 05:16:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:00.619 05:16:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:00.619 05:16:52 -- paths/export.sh@5 -- $ export PATH
01:22:00.619 05:16:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:22:00.619 05:16:52 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
01:22:00.619 05:16:52 -- common/autobuild_common.sh@493 -- $ date +%s
01:22:00.619 05:16:52 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733721412.XXXXXX
01:22:00.619 05:16:52 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733721412.ymVCBw
01:22:00.619 05:16:52 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
01:22:00.619 05:16:52 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
01:22:00.619 05:16:52 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
01:22:00.619 05:16:52 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
01:22:00.619 05:16:52 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
01:22:00.619 05:16:52 -- common/autobuild_common.sh@509 -- $ get_config_params
01:22:00.619 05:16:52 -- common/autotest_common.sh@409 -- $ xtrace_disable
01:22:00.619 05:16:52 -- common/autotest_common.sh@10 -- $ set +x
01:22:00.619 05:16:52 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
01:22:00.619 05:16:52 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
01:22:00.619 05:16:52 -- pm/common@17 -- $ local monitor
01:22:00.619 05:16:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:22:00.619 05:16:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:22:00.619 05:16:52 -- pm/common@21 -- $ date +%s
01:22:00.619 05:16:52 -- pm/common@25 -- $ sleep 1
01:22:00.619 05:16:52 -- pm/common@21 -- $ date +%s
01:22:00.619 05:16:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721412
01:22:00.619 05:16:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721412
01:22:00.619 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721412_collect-cpu-load.pm.log
01:22:00.619 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721412_collect-vmstat.pm.log
01:22:01.555 05:16:53 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
01:22:01.555 05:16:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
01:22:01.555 05:16:53 -- spdk/autobuild.sh@12 -- $ umask 022
01:22:01.555 05:16:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
01:22:01.555 05:16:53 -- spdk/autobuild.sh@16 -- $ date -u
01:22:01.555 Mon Dec 9 05:16:53 AM UTC 2024
01:22:01.555 05:16:53 -- spdk/autobuild.sh@17 -- $ git describe --tags
01:22:01.555 v25.01-pre-278-g66902d69a
01:22:01.555 05:16:53 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
01:22:01.555 05:16:53 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
01:22:01.555 05:16:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
01:22:01.555 05:16:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
01:22:01.555 05:16:53 -- common/autotest_common.sh@10 -- $ set +x
01:22:01.555 ************************************
01:22:01.555 START TEST asan
01:22:01.555 ************************************
01:22:01.555 using asan
01:22:01.555 05:16:53 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
01:22:01.555
01:22:01.555 real 0m0.001s
01:22:01.555 user 0m0.000s
01:22:01.555 sys 0m0.000s
01:22:01.555 05:16:53 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
01:22:01.555 05:16:53 asan -- common/autotest_common.sh@10 -- $ set +x
01:22:01.555 ************************************
01:22:01.555 END TEST asan
01:22:01.555 ************************************
01:22:01.555 05:16:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
01:22:01.555 05:16:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
01:22:01.555 05:16:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
01:22:01.555 05:16:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
01:22:01.555 05:16:53 -- common/autotest_common.sh@10 -- $ set +x
01:22:01.555 ************************************
01:22:01.555 START TEST ubsan
01:22:01.555 ************************************
01:22:01.555 using ubsan
01:22:01.555 05:16:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
01:22:01.555
01:22:01.555 real 0m0.000s
01:22:01.555 user 0m0.000s
01:22:01.555 sys 0m0.000s
01:22:01.555 05:16:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
01:22:01.555 05:16:53 ubsan -- common/autotest_common.sh@10 -- $ set +x
01:22:01.555 ************************************
01:22:01.555 END TEST ubsan
01:22:01.555 ************************************
01:22:01.814 05:16:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
01:22:01.814 05:16:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
01:22:01.814 05:16:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
01:22:01.814 05:16:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
01:22:01.814 05:16:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
01:22:01.814 05:16:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
01:22:01.814 05:16:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
01:22:01.814 05:16:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
01:22:01.814 05:16:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
01:22:01.814 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
01:22:01.814 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
01:22:02.382 Using 'verbs' RDMA provider
01:22:18.190 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
01:22:30.389 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
01:22:30.389 Creating mk/config.mk...done.
01:22:30.389 Creating mk/cc.flags.mk...done.
01:22:30.389 Type 'make' to build.
01:22:30.389 05:17:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
01:22:30.389 05:17:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
01:22:30.389 05:17:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
01:22:30.389 05:17:20 -- common/autotest_common.sh@10 -- $ set +x
01:22:30.389 ************************************
01:22:30.389 START TEST make
01:22:30.389 ************************************
01:22:30.389 05:17:20 make -- common/autotest_common.sh@1129 -- $ make -j10
01:22:30.389 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
01:22:30.389 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
01:22:30.389 meson setup builddir \
01:22:30.389 -Dwith-libaio=enabled \
01:22:30.389 -Dwith-liburing=enabled \
01:22:30.389 -Dwith-libvfn=disabled \
01:22:30.389 -Dwith-spdk=disabled \
01:22:30.389 -Dexamples=false \
01:22:30.389 -Dtests=false \
01:22:30.389 -Dtools=false && \
01:22:30.389 meson compile -C builddir && \
01:22:30.389 cd -)
01:22:30.389 make[1]: Nothing to be done for 'all'.
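Because SPDK was configured with --with-xnvme, the make step above first builds the bundled xnvme library via the Meson subshell just printed; the Meson configure output that follows is for that step. A standalone restatement of the same configuration, with nothing beyond what the traced command already contains (libaio and io_uring backends enabled, libvfn and the SPDK backend disabled, no examples/tests/tools):

    # Equivalent of the xnvme configure/compile step traced in the make output above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir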
01:22:32.288 The Meson build system
01:22:32.288 Version: 1.5.0
01:22:32.288 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
01:22:32.288 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
01:22:32.288 Build type: native build
01:22:32.288 Project name: xnvme
01:22:32.288 Project version: 0.7.5
01:22:32.288 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
01:22:32.288 C linker for the host machine: cc ld.bfd 2.40-14
01:22:32.288 Host machine cpu family: x86_64
01:22:32.288 Host machine cpu: x86_64
01:22:32.288 Message: host_machine.system: linux
01:22:32.288 Compiler for C supports arguments -Wno-missing-braces: YES
01:22:32.288 Compiler for C supports arguments -Wno-cast-function-type: YES
01:22:32.288 Compiler for C supports arguments -Wno-strict-aliasing: YES
01:22:32.288 Run-time dependency threads found: YES
01:22:32.288 Has header "setupapi.h" : NO
01:22:32.288 Has header "linux/blkzoned.h" : YES
01:22:32.288 Has header "linux/blkzoned.h" : YES (cached)
01:22:32.288 Has header "libaio.h" : YES
01:22:32.288 Library aio found: YES
01:22:32.288 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
01:22:32.288 Run-time dependency liburing found: YES 2.2
01:22:32.288 Dependency libvfn skipped: feature with-libvfn disabled
01:22:32.288 Found CMake: /usr/bin/cmake (3.27.7)
01:22:32.288 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
01:22:32.288 Subproject spdk : skipped: feature with-spdk disabled
01:22:32.288 Run-time dependency appleframeworks found: NO (tried framework)
01:22:32.288 Run-time dependency appleframeworks found: NO (tried framework)
01:22:32.288 Library rt found: YES
01:22:32.288 Checking for function "clock_gettime" with dependency -lrt: YES
01:22:32.288 Configuring xnvme_config.h using configuration
01:22:32.288 Configuring xnvme.spec using configuration
01:22:32.288 Run-time dependency bash-completion found: YES 2.11
01:22:32.288 Message: Bash-completions: /usr/share/bash-completion/completions
01:22:32.288 Program cp found: YES (/usr/bin/cp)
01:22:32.288 Build targets in project: 3
01:22:32.288
01:22:32.288 xnvme 0.7.5
01:22:32.288
01:22:32.288 Subprojects
01:22:32.288 spdk : NO Feature 'with-spdk' disabled
01:22:32.288
01:22:32.288 User defined options
01:22:32.288 examples : false
01:22:32.289 tests : false
01:22:32.289 tools : false
01:22:32.289 with-libaio : enabled
01:22:32.289 with-liburing: enabled
01:22:32.289 with-libvfn : disabled
01:22:32.289 with-spdk : disabled
01:22:32.289
01:22:32.289 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
01:22:32.856 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
01:22:32.856 [1/76] Generating toolbox/xnvme-driver-script with a custom command
01:22:33.115 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
01:22:33.115 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
01:22:33.115 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
01:22:33.115 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
01:22:33.115 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
01:22:33.115 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
01:22:33.115 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
01:22:33.115 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
01:22:33.115 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
01:22:33.115 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
01:22:33.115 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
01:22:33.115 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
01:22:33.115 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
01:22:33.115 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
01:22:33.115 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
01:22:33.115 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
01:22:33.115 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
01:22:33.396 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
01:22:33.396 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
01:22:33.396 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
01:22:33.396 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
01:22:33.396 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
01:22:33.396 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
01:22:33.396 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
01:22:33.396 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
01:22:33.396 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
01:22:33.396 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
01:22:33.396 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
01:22:33.397 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
01:22:33.397 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
01:22:33.397 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
01:22:33.397 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
01:22:33.397 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
01:22:33.397 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
01:22:33.397 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
01:22:33.397 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
01:22:33.397 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
01:22:33.397 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
01:22:33.397 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
01:22:33.397 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
01:22:33.397 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
01:22:33.397 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
01:22:33.397 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
01:22:33.397 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
01:22:33.397 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
01:22:33.397 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
01:22:33.397 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
01:22:33.397 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
01:22:33.397 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
01:22:33.397 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
01:22:33.654 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
01:22:33.654 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
01:22:33.654 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
01:22:33.654 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
01:22:33.654 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
01:22:33.654 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
01:22:33.654 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
01:22:33.654 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
01:22:33.654 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
01:22:33.654 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
01:22:33.654 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
01:22:33.654 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
01:22:33.654 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
01:22:33.654 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
01:22:33.654 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
01:22:33.654 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
01:22:33.913 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
01:22:33.913 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
01:22:33.913 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
01:22:33.913 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
01:22:33.913 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
01:22:33.913 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
01:22:34.172 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
01:22:34.430 [75/76] Linking static target lib/libxnvme.a
01:22:34.430 [76/76] Linking target lib/libxnvme.so.0.7.5
01:22:34.430 INFO: autodetecting backend as ninja
01:22:34.430 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
01:22:34.431 /home/vagrant/spdk_repo/spdk/xnvmebuild
01:22:42.567 The Meson build system
01:22:42.567 Version: 1.5.0
01:22:42.567 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
01:22:42.567 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
01:22:42.567 Build type: native build
01:22:42.567 Program cat found: YES (/usr/bin/cat)
01:22:42.567 Project name: DPDK
01:22:42.567 Project version: 24.03.0
01:22:42.567 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
01:22:42.567 C linker for the host machine: cc ld.bfd 2.40-14
01:22:42.567 Host machine cpu family: x86_64
01:22:42.567 Host machine cpu: x86_64
01:22:42.567 Message: ## Building in Developer Mode ##
01:22:42.567 Program pkg-config found: YES (/usr/bin/pkg-config)
01:22:42.567 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
01:22:42.567 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
01:22:42.567 Program python3 found: YES (/usr/bin/python3)
01:22:42.567 Program cat found: YES (/usr/bin/cat)
01:22:42.567 Compiler for C supports arguments -march=native: YES
01:22:42.567 Checking for size of "void *" : 8
01:22:42.567 Checking for size of "void *" : 8 (cached)
01:22:42.567 Compiler for C supports link arguments -Wl,--undefined-version: YES
01:22:42.567 Library m found: YES
01:22:42.567 Library numa found: YES
01:22:42.567 Has header "numaif.h" : YES
01:22:42.567 Library fdt found: NO
01:22:42.567 Library execinfo found: NO
01:22:42.567 Has header "execinfo.h" : YES
01:22:42.567 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
01:22:42.567 Run-time dependency libarchive found: NO (tried pkgconfig)
01:22:42.567 Run-time dependency libbsd found: NO (tried pkgconfig)
01:22:42.567 Run-time dependency jansson found: NO (tried pkgconfig)
01:22:42.567 Run-time dependency openssl found: YES 3.1.1
01:22:42.567 Run-time dependency libpcap found: YES 1.10.4
01:22:42.567 Has header "pcap.h" with dependency libpcap: YES
01:22:42.567 Compiler for C supports arguments -Wcast-qual: YES
01:22:42.567 Compiler for C supports arguments -Wdeprecated: YES
01:22:42.567 Compiler for C supports arguments -Wformat: YES
01:22:42.567 Compiler for C supports arguments -Wformat-nonliteral: NO
01:22:42.567 Compiler for C supports arguments -Wformat-security: NO
01:22:42.567 Compiler for C supports arguments -Wmissing-declarations: YES
01:22:42.567 Compiler for C supports arguments -Wmissing-prototypes: YES
01:22:42.567 Compiler for C supports arguments -Wnested-externs: YES
01:22:42.567 Compiler for C supports arguments -Wold-style-definition: YES
01:22:42.567 Compiler for C supports arguments -Wpointer-arith: YES
01:22:42.567 Compiler for C supports arguments -Wsign-compare: YES
01:22:42.567 Compiler for C supports arguments -Wstrict-prototypes: YES
01:22:42.567 Compiler for C supports arguments -Wundef: YES
01:22:42.567 Compiler for C supports arguments -Wwrite-strings: YES
01:22:42.567 Compiler for C supports arguments -Wno-address-of-packed-member: YES
01:22:42.567 Compiler for C supports arguments -Wno-packed-not-aligned: YES
01:22:42.567 Compiler for C supports arguments -Wno-missing-field-initializers: YES
01:22:42.567 Compiler for C supports arguments -Wno-zero-length-bounds: YES
01:22:42.567 Program objdump found: YES (/usr/bin/objdump)
01:22:42.567 Compiler for C supports arguments -mavx512f: YES
01:22:42.567 Checking if "AVX512 checking" compiles: YES
01:22:42.567 Fetching value of define "__SSE4_2__" : 1
01:22:42.567 Fetching value of define "__AES__" : 1
01:22:42.567 Fetching value of define "__AVX__" : 1
01:22:42.567 Fetching value of define "__AVX2__" : 1
01:22:42.568 Fetching value of define "__AVX512BW__" : (undefined)
01:22:42.568 Fetching value of define "__AVX512CD__" : (undefined)
01:22:42.568 Fetching value of define "__AVX512DQ__" : (undefined)
01:22:42.568 Fetching value of define "__AVX512F__" : (undefined)
01:22:42.568 Fetching value of define "__AVX512VL__" : (undefined)
01:22:42.568 Fetching value of define "__PCLMUL__" : 1
01:22:42.568 Fetching value of define "__RDRND__" : 1
01:22:42.568 Fetching value of define "__RDSEED__" : 1
01:22:42.568 Fetching value of define "__VPCLMULQDQ__" : (undefined)
01:22:42.568 Fetching value of define "__znver1__" : (undefined)
01:22:42.568 Fetching value of define "__znver2__" : (undefined)
01:22:42.568 Fetching value of define "__znver3__" : (undefined)
01:22:42.568 Fetching value of define "__znver4__" : (undefined)
01:22:42.568 Library asan found: YES
01:22:42.568 Compiler for C supports arguments -Wno-format-truncation: YES
01:22:42.568 Message: lib/log: Defining dependency "log"
01:22:42.568 Message: lib/kvargs: Defining dependency "kvargs"
01:22:42.568 Message: lib/telemetry: Defining dependency "telemetry"
01:22:42.568 Library rt found: YES
01:22:42.568 Checking for function "getentropy" : NO
01:22:42.568 Message: lib/eal: Defining dependency "eal"
01:22:42.568 Message: lib/ring: Defining dependency "ring"
01:22:42.568 Message: lib/rcu: Defining dependency "rcu"
01:22:42.568 Message: lib/mempool: Defining dependency "mempool"
01:22:42.568 Message: lib/mbuf: Defining dependency "mbuf"
01:22:42.568 Fetching value of define "__PCLMUL__" : 1 (cached)
01:22:42.568 Fetching value of define "__AVX512F__" : (undefined) (cached)
01:22:42.568 Compiler for C supports arguments -mpclmul: YES
01:22:42.568 Compiler for C supports arguments -maes: YES
01:22:42.568 Compiler for C supports arguments -mavx512f: YES (cached)
01:22:42.568 Compiler for C supports arguments -mavx512bw: YES
01:22:42.568 Compiler for C supports arguments -mavx512dq: YES
01:22:42.568 Compiler for C supports arguments -mavx512vl: YES
01:22:42.568 Compiler for C supports arguments -mvpclmulqdq: YES
01:22:42.568 Compiler for C supports arguments -mavx2: YES
01:22:42.568 Compiler for C supports arguments -mavx: YES
01:22:42.568 Message: lib/net: Defining dependency "net"
01:22:42.568 Message: lib/meter: Defining dependency "meter"
01:22:42.568 Message: lib/ethdev: Defining dependency "ethdev"
01:22:42.568 Message: lib/pci: Defining dependency "pci"
01:22:42.568 Message: lib/cmdline: Defining dependency "cmdline"
01:22:42.568 Message: lib/hash: Defining dependency "hash"
01:22:42.568 Message: lib/timer: Defining dependency "timer"
01:22:42.568 Message: lib/compressdev: Defining dependency "compressdev"
01:22:42.568 Message: lib/cryptodev: Defining dependency "cryptodev"
01:22:42.568 Message: lib/dmadev: Defining dependency "dmadev"
01:22:42.568 Compiler for C supports arguments -Wno-cast-qual: YES
01:22:42.568 Message: lib/power: Defining dependency "power"
01:22:42.568 Message: lib/reorder: Defining dependency "reorder"
01:22:42.568 Message: lib/security: Defining dependency "security"
01:22:42.568 Has header "linux/userfaultfd.h" : YES
01:22:42.568 Has header "linux/vduse.h" : YES
01:22:42.568 Message: lib/vhost: Defining dependency "vhost"
01:22:42.568 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
01:22:42.568 Message: drivers/bus/pci: Defining dependency "bus_pci"
01:22:42.568 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
01:22:42.568 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
01:22:42.568 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
01:22:42.568 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
01:22:42.568 Message: Disabling ml/* drivers: missing internal dependency "mldev"
01:22:42.568 Message: Disabling event/* drivers: missing internal dependency "eventdev"
01:22:42.568 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
01:22:42.568 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
01:22:42.568 Program doxygen found: YES (/usr/local/bin/doxygen)
01:22:42.568 Configuring doxy-api-html.conf using configuration
01:22:42.568 Configuring doxy-api-man.conf using configuration
01:22:42.568 Program mandb found: YES (/usr/bin/mandb)
01:22:42.568 Program sphinx-build found: NO
01:22:42.568 Configuring rte_build_config.h using configuration
01:22:42.568 Message:
01:22:42.568 =================
01:22:42.568 Applications Enabled
01:22:42.568 =================
01:22:42.568
01:22:42.568 apps:
01:22:42.568
01:22:42.568
01:22:42.568 Message:
01:22:42.568 =================
01:22:42.568 Libraries Enabled
01:22:42.568 =================
01:22:42.568
01:22:42.568 libs:
01:22:42.568 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
01:22:42.568 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
01:22:42.568 cryptodev, dmadev, power, reorder, security, vhost,
01:22:42.568
01:22:42.568 Message:
01:22:42.568 ===============
01:22:42.568 Drivers Enabled
01:22:42.568 ===============
01:22:42.568
01:22:42.568 common:
01:22:42.568
01:22:42.568 bus:
01:22:42.568 pci, vdev,
01:22:42.568 mempool:
01:22:42.568 ring,
01:22:42.568 dma:
01:22:42.568
01:22:42.568 net:
01:22:42.568
01:22:42.568 crypto:
01:22:42.568
01:22:42.568 compress:
01:22:42.568
01:22:42.568 vdpa:
01:22:42.568
01:22:42.568
01:22:42.568 Message:
01:22:42.568 =================
01:22:42.568 Content Skipped
01:22:42.568 =================
01:22:42.568
01:22:42.568 apps:
01:22:42.568 dumpcap: explicitly disabled via build config
01:22:42.568 graph: explicitly disabled via build config
01:22:42.568 pdump: explicitly disabled via build config
01:22:42.568 proc-info: explicitly disabled via build config
01:22:42.568 test-acl: explicitly disabled via build config
01:22:42.568 test-bbdev: explicitly disabled via build config
01:22:42.568 test-cmdline: explicitly disabled via build config
01:22:42.568 test-compress-perf: explicitly disabled via build config
01:22:42.568 test-crypto-perf: explicitly disabled via build config
01:22:42.568 test-dma-perf: explicitly disabled via build config
01:22:42.568 test-eventdev: explicitly disabled via build config
01:22:42.568 test-fib: explicitly disabled via build config
01:22:42.568 test-flow-perf: explicitly disabled via build config
01:22:42.568 test-gpudev: explicitly disabled via build config
01:22:42.568 test-mldev: explicitly disabled via build config
01:22:42.568 test-pipeline: explicitly disabled via build config
01:22:42.568 test-pmd: explicitly disabled via build config
01:22:42.568 test-regex: explicitly disabled via build config
01:22:42.568 test-sad: explicitly disabled via build config
01:22:42.568 test-security-perf: explicitly disabled via build config
01:22:42.568
01:22:42.568 libs:
01:22:42.568 argparse: explicitly disabled via build config
01:22:42.568 metrics: explicitly disabled via build config
01:22:42.568 acl: explicitly disabled via build config
01:22:42.568 bbdev: explicitly disabled via build config
01:22:42.568 bitratestats: explicitly disabled via build config
01:22:42.568 bpf: explicitly disabled via build config
01:22:42.568 cfgfile: explicitly disabled via build config
01:22:42.568 distributor: explicitly disabled via build config
01:22:42.568 efd: explicitly disabled via build config
01:22:42.568 eventdev: explicitly disabled via build config
01:22:42.568 dispatcher: explicitly disabled via build config
01:22:42.568 gpudev: explicitly disabled via build config
01:22:42.568 gro: explicitly disabled via build config
01:22:42.568 gso: explicitly disabled via build config
01:22:42.568 ip_frag: explicitly disabled via build config
01:22:42.568 jobstats: explicitly disabled via build config
01:22:42.568 latencystats: explicitly disabled via build config
01:22:42.568 lpm: explicitly disabled via build config
01:22:42.568 member: explicitly disabled via build config
01:22:42.568 pcapng: explicitly disabled via build config
01:22:42.568 rawdev: explicitly disabled via build config
01:22:42.568 regexdev: explicitly disabled via build config
01:22:42.568 mldev: explicitly disabled via build config
01:22:42.568 rib: explicitly disabled via build config
01:22:42.568 sched: explicitly disabled via build config
01:22:42.568 stack: explicitly disabled via build config
01:22:42.568 ipsec: explicitly disabled via build config
01:22:42.568 pdcp: explicitly disabled via build config
01:22:42.568 fib: explicitly disabled via build config
01:22:42.568 port: explicitly disabled via build config
01:22:42.568 pdump: explicitly disabled via build config
01:22:42.568 table: explicitly disabled via build config
01:22:42.568 pipeline: explicitly disabled via build config
01:22:42.568 graph: explicitly disabled via build config
01:22:42.568 node: explicitly disabled via build config
01:22:42.568
01:22:42.568 drivers:
01:22:42.568 common/cpt: not in enabled drivers build config
01:22:42.568 common/dpaax: not in enabled drivers build config
01:22:42.568 common/iavf: not in enabled drivers build config
01:22:42.568 common/idpf: not in enabled drivers build config
01:22:42.568 common/ionic: not in enabled drivers build config
01:22:42.568 common/mvep: not in enabled drivers build config
01:22:42.568 common/octeontx: not in enabled drivers build config
01:22:42.568 bus/auxiliary: not in enabled drivers build config
01:22:42.568 bus/cdx: not in enabled drivers build config
01:22:42.568 bus/dpaa: not in enabled drivers build config
01:22:42.569 bus/fslmc: not in enabled drivers build config
01:22:42.569 bus/ifpga: not in enabled drivers build config
01:22:42.569 bus/platform: not in enabled drivers build config
01:22:42.569 bus/uacce: not in enabled drivers build config
01:22:42.569 bus/vmbus: not in enabled drivers build config
01:22:42.569 common/cnxk: not in enabled drivers build config
01:22:42.569 common/mlx5: not in enabled drivers build config
01:22:42.569 common/nfp: not in enabled drivers build config
01:22:42.569 common/nitrox: not in enabled drivers build config
01:22:42.569 common/qat: not in enabled drivers build config
01:22:42.569 common/sfc_efx: not in enabled drivers build config
01:22:42.569 mempool/bucket: not in enabled drivers build config
01:22:42.569 mempool/cnxk: not in enabled drivers build config
01:22:42.569 mempool/dpaa: not in enabled drivers build config
01:22:42.569 mempool/dpaa2: not in enabled drivers build config
01:22:42.569 mempool/octeontx: not in enabled drivers build config
01:22:42.569 mempool/stack: not in enabled drivers build config
01:22:42.569 dma/cnxk: not in enabled drivers build config
01:22:42.569 dma/dpaa: not in enabled drivers build config
01:22:42.569 dma/dpaa2: not in enabled drivers build config
01:22:42.569 dma/hisilicon: not in enabled drivers build config
01:22:42.569 dma/idxd: not in enabled drivers build config
01:22:42.569 dma/ioat: not in enabled drivers build config
01:22:42.569 dma/skeleton: not in enabled drivers build config
01:22:42.569 net/af_packet: not in enabled drivers build config
01:22:42.569 net/af_xdp: not in enabled drivers build config
01:22:42.569 net/ark: not in enabled drivers build config
01:22:42.569 net/atlantic: not in enabled drivers build config
01:22:42.569 net/avp: not in enabled drivers build config
01:22:42.569 net/axgbe: not in enabled drivers build config
01:22:42.569 net/bnx2x: not in enabled drivers build config
01:22:42.569 net/bnxt: not in enabled drivers build config
01:22:42.569 net/bonding: not in enabled drivers build config
01:22:42.569 net/cnxk: not in enabled drivers build config
01:22:42.569 net/cpfl: not in enabled drivers build config
01:22:42.569 net/cxgbe: not in enabled drivers build config
01:22:42.569 net/dpaa: not in enabled drivers build config
01:22:42.569 net/dpaa2: not in enabled drivers build config
01:22:42.569 net/e1000: not in enabled drivers build config
01:22:42.569 net/ena: not in enabled drivers build config
01:22:42.569 net/enetc: not in enabled drivers build config
01:22:42.569 net/enetfec: not in enabled drivers build config
01:22:42.569 net/enic: not in enabled drivers build config
01:22:42.569 net/failsafe: not in enabled drivers build config
01:22:42.569 net/fm10k: not in enabled drivers build config
01:22:42.569 net/gve: not in enabled drivers build config
01:22:42.569 net/hinic: not in enabled drivers build config
01:22:42.569 net/hns3: not in enabled drivers build config
01:22:42.569 net/i40e: not in enabled drivers build config
01:22:42.569 net/iavf: not in enabled drivers build config
01:22:42.569 net/ice: not in enabled drivers build config
01:22:42.569 net/idpf: not in enabled drivers build config
01:22:42.569 net/igc: not in enabled drivers build config
01:22:42.569 net/ionic: not in enabled drivers build config
01:22:42.569 net/ipn3ke: not in enabled drivers build config
01:22:42.569 net/ixgbe: not in enabled drivers build config
01:22:42.569 net/mana: not in enabled drivers build config
01:22:42.569 net/memif: not in enabled drivers build config
01:22:42.569 net/mlx4: not in enabled drivers build config
01:22:42.569 net/mlx5: not in enabled drivers build config
01:22:42.569 net/mvneta: not in enabled drivers build config
01:22:42.569 net/mvpp2: not in enabled drivers build config
01:22:42.569 net/netvsc: not in enabled drivers build config
01:22:42.569 net/nfb: not in enabled drivers build config
01:22:42.569 net/nfp: not in enabled drivers build config
01:22:42.569 net/ngbe: not in enabled drivers build config
01:22:42.569 net/null: not in enabled drivers build config
01:22:42.569 net/octeontx: not in enabled drivers build config
01:22:42.569 net/octeon_ep: not in enabled drivers build config
01:22:42.569 net/pcap: not in enabled drivers build config
01:22:42.569 net/pfe: not in enabled drivers build config
01:22:42.569 net/qede: not in enabled drivers build config
01:22:42.569 net/ring: not in enabled drivers build config
01:22:42.569 net/sfc: not in enabled drivers build config
01:22:42.569 net/softnic: not in enabled drivers build config
01:22:42.569 net/tap: not in enabled drivers build config
01:22:42.569 net/thunderx: not in enabled drivers build config
01:22:42.569 net/txgbe: not in enabled drivers build config
01:22:42.569 net/vdev_netvsc: not in enabled drivers build config
01:22:42.569 net/vhost: not in enabled drivers build config
01:22:42.569 net/virtio: not in enabled drivers build config
01:22:42.569 net/vmxnet3: not in enabled drivers build config
01:22:42.569 raw/*: missing internal dependency, "rawdev"
01:22:42.569 crypto/armv8: not in enabled drivers build config
01:22:42.569 crypto/bcmfs: not in enabled drivers build config
01:22:42.569 crypto/caam_jr: not in enabled drivers build config
01:22:42.569 crypto/ccp: not in enabled drivers build config
01:22:42.569 crypto/cnxk: not in enabled drivers build config
01:22:42.569 crypto/dpaa_sec: not in enabled drivers build config
01:22:42.569 crypto/dpaa2_sec: not in enabled drivers build config
01:22:42.569 crypto/ipsec_mb: not in enabled drivers build config
01:22:42.569 crypto/mlx5: not in enabled drivers build config
01:22:42.569 crypto/mvsam: not in enabled drivers build config
01:22:42.569 crypto/nitrox: not in enabled drivers build config
01:22:42.569 crypto/null: not in enabled drivers build config
01:22:42.569 crypto/octeontx: not in enabled drivers build config
01:22:42.569
crypto/openssl: not in enabled drivers build config 01:22:42.569 crypto/scheduler: not in enabled drivers build config 01:22:42.569 crypto/uadk: not in enabled drivers build config 01:22:42.569 crypto/virtio: not in enabled drivers build config 01:22:42.569 compress/isal: not in enabled drivers build config 01:22:42.569 compress/mlx5: not in enabled drivers build config 01:22:42.569 compress/nitrox: not in enabled drivers build config 01:22:42.569 compress/octeontx: not in enabled drivers build config 01:22:42.569 compress/zlib: not in enabled drivers build config 01:22:42.569 regex/*: missing internal dependency, "regexdev" 01:22:42.569 ml/*: missing internal dependency, "mldev" 01:22:42.569 vdpa/ifc: not in enabled drivers build config 01:22:42.569 vdpa/mlx5: not in enabled drivers build config 01:22:42.569 vdpa/nfp: not in enabled drivers build config 01:22:42.569 vdpa/sfc: not in enabled drivers build config 01:22:42.569 event/*: missing internal dependency, "eventdev" 01:22:42.569 baseband/*: missing internal dependency, "bbdev" 01:22:42.569 gpu/*: missing internal dependency, "gpudev" 01:22:42.569 01:22:42.569 01:22:43.149 Build targets in project: 85 01:22:43.149 01:22:43.149 DPDK 24.03.0 01:22:43.149 01:22:43.149 User defined options 01:22:43.149 buildtype : debug 01:22:43.149 default_library : shared 01:22:43.149 libdir : lib 01:22:43.149 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 01:22:43.149 b_sanitize : address 01:22:43.149 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 01:22:43.149 c_link_args : 01:22:43.149 cpu_instruction_set: native 01:22:43.149 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 01:22:43.149 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 01:22:43.149 enable_docs : false 01:22:43.149 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 01:22:43.149 enable_kmods : false 01:22:43.149 max_lcores : 128 01:22:43.149 tests : false 01:22:43.149 01:22:43.149 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 01:22:43.715 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 01:22:43.715 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 01:22:43.715 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 01:22:43.715 [3/268] Linking static target lib/librte_kvargs.a 01:22:43.715 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 01:22:43.715 [5/268] Linking static target lib/librte_log.a 01:22:43.715 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 01:22:44.281 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 01:22:44.539 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 01:22:44.539 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 01:22:44.539 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 01:22:44.539 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
01:22:44.539 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 01:22:44.539 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 01:22:44.540 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 01:22:44.798 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 01:22:44.798 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 01:22:45.056 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 01:22:45.056 [18/268] Linking static target lib/librte_telemetry.a 01:22:45.056 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 01:22:45.056 [20/268] Linking target lib/librte_log.so.24.1 01:22:45.314 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 01:22:45.314 [22/268] Linking target lib/librte_kvargs.so.24.1 01:22:45.570 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 01:22:45.570 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 01:22:45.570 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 01:22:45.570 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 01:22:45.570 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 01:22:45.571 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 01:22:45.828 [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 01:22:45.828 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 01:22:45.828 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 01:22:45.828 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 01:22:46.086 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 01:22:46.086 [34/268] Linking target lib/librte_telemetry.so.24.1 01:22:46.344 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 01:22:46.344 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 01:22:46.344 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 01:22:46.601 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 01:22:46.601 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 01:22:46.601 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 01:22:46.601 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 01:22:46.601 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 01:22:46.601 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 01:22:46.601 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 01:22:46.859 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 01:22:47.116 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 01:22:47.373 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 01:22:47.373 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 01:22:47.373 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 01:22:47.373 [50/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 01:22:47.373 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 01:22:47.631 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 01:22:47.631 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 01:22:47.631 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 01:22:47.928 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 01:22:47.928 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 01:22:48.205 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 01:22:48.205 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 01:22:48.464 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 01:22:48.464 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 01:22:48.464 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 01:22:48.464 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 01:22:48.722 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 01:22:48.722 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 01:22:48.722 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 01:22:48.722 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 01:22:48.981 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 01:22:49.549 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 01:22:49.549 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 01:22:49.549 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 01:22:49.549 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 01:22:49.549 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 01:22:49.549 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 01:22:49.549 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 01:22:49.549 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 01:22:49.807 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 01:22:49.807 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 01:22:49.807 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 01:22:49.807 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 01:22:50.380 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 01:22:50.380 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 01:22:50.381 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 01:22:50.381 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 01:22:50.381 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 01:22:50.639 [85/268] Linking static target lib/librte_eal.a 01:22:50.639 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 01:22:50.639 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 01:22:50.639 [88/268] Linking static target lib/librte_ring.a 01:22:50.639 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 01:22:50.897 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 01:22:50.897 [91/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 01:22:50.897 [92/268] Linking static target lib/librte_rcu.a 01:22:50.897 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 01:22:51.155 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 01:22:51.155 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 01:22:51.155 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 01:22:51.413 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 01:22:51.413 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 01:22:51.413 [99/268] Linking static target lib/librte_mempool.a 01:22:51.413 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 01:22:51.672 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 01:22:51.672 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 01:22:51.930 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 01:22:51.930 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 01:22:51.930 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 01:22:52.188 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 01:22:52.188 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 01:22:52.188 [108/268] Linking static target lib/librte_meter.a 01:22:52.188 [109/268] Linking static target lib/librte_net.a 01:22:52.188 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 01:22:52.188 [111/268] Linking static target lib/librte_mbuf.a 01:22:52.465 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 01:22:52.465 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 01:22:52.465 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 01:22:52.465 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 01:22:52.724 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 01:22:52.983 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 01:22:52.983 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 01:22:53.241 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 01:22:53.500 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 01:22:53.758 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 01:22:53.758 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 01:22:53.758 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 01:22:54.016 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 01:22:54.016 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 01:22:54.016 [126/268] Linking static target lib/librte_pci.a 01:22:54.273 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 01:22:54.273 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 01:22:54.273 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 01:22:54.273 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 01:22:54.531 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 01:22:54.531 [132/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 01:22:54.531 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 01:22:54.531 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 01:22:54.531 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 01:22:54.531 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 01:22:54.789 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 01:22:54.789 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 01:22:54.789 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 01:22:54.789 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 01:22:54.789 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 01:22:54.789 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 01:22:54.789 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 01:22:55.046 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 01:22:55.046 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 01:22:55.304 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 01:22:55.304 [147/268] Linking static target lib/librte_cmdline.a 01:22:55.561 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 01:22:55.561 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 01:22:55.819 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 01:22:55.819 [151/268] Linking static target lib/librte_timer.a 01:22:55.819 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 01:22:55.819 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 01:22:56.076 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 01:22:56.076 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 01:22:56.076 [156/268] Linking static target lib/librte_ethdev.a 01:22:56.076 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 01:22:56.335 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 01:22:56.335 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 01:22:56.591 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 01:22:56.591 [161/268] Linking static target lib/librte_hash.a 01:22:56.591 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 01:22:56.591 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 01:22:56.848 [164/268] Linking static target lib/librte_compressdev.a 01:22:56.848 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 01:22:57.105 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 01:22:57.105 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 01:22:57.105 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 01:22:57.105 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 01:22:57.105 [170/268] Linking static target lib/librte_dmadev.a 01:22:57.362 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 01:22:57.362 [172/268] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 01:22:57.618 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 01:22:57.618 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 01:22:57.875 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 01:22:57.875 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 01:22:58.133 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 01:22:58.133 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 01:22:58.133 [179/268] Linking static target lib/librte_cryptodev.a 01:22:58.133 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 01:22:58.133 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 01:22:58.133 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 01:22:58.390 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 01:22:58.648 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 01:22:58.648 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 01:22:58.907 [186/268] Linking static target lib/librte_reorder.a 01:22:58.907 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 01:22:58.907 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 01:22:58.907 [189/268] Linking static target lib/librte_power.a 01:22:59.166 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 01:22:59.166 [191/268] Linking static target lib/librte_security.a 01:22:59.166 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 01:22:59.425 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 01:22:59.425 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 01:22:59.992 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 01:22:59.992 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 01:23:00.250 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 01:23:00.508 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 01:23:00.508 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 01:23:00.508 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 01:23:00.766 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 01:23:00.766 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 01:23:00.766 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 01:23:01.029 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 01:23:01.029 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 01:23:01.293 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 01:23:01.550 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 01:23:01.550 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 01:23:01.550 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 01:23:01.550 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 01:23:01.550 [211/268] 
Linking static target drivers/libtmp_rte_bus_pci.a 01:23:01.808 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 01:23:01.808 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 01:23:01.808 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:23:01.808 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:23:01.808 [216/268] Linking static target drivers/librte_bus_vdev.a 01:23:01.808 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:23:01.808 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 01:23:01.808 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:23:01.808 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 01:23:01.808 [221/268] Linking static target drivers/librte_bus_pci.a 01:23:02.067 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 01:23:02.067 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:23:02.067 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:23:02.067 [225/268] Linking static target drivers/librte_mempool_ring.a 01:23:02.327 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 01:23:02.585 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 01:23:03.519 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 01:23:03.519 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 01:23:03.519 [230/268] Linking target lib/librte_eal.so.24.1 01:23:03.777 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 01:23:03.777 [232/268] Linking target lib/librte_dmadev.so.24.1 01:23:03.777 [233/268] Linking target lib/librte_timer.so.24.1 01:23:03.777 [234/268] Linking target lib/librte_ring.so.24.1 01:23:03.777 [235/268] Linking target lib/librte_meter.so.24.1 01:23:03.777 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 01:23:03.777 [237/268] Linking target lib/librte_pci.so.24.1 01:23:03.777 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 01:23:03.777 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 01:23:04.035 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 01:23:04.035 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 01:23:04.035 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 01:23:04.035 [243/268] Linking target lib/librte_mempool.so.24.1 01:23:04.035 [244/268] Linking target lib/librte_rcu.so.24.1 01:23:04.035 [245/268] Linking target drivers/librte_bus_pci.so.24.1 01:23:04.035 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 01:23:04.035 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 01:23:04.292 [248/268] Linking target lib/librte_mbuf.so.24.1 01:23:04.292 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 01:23:04.292 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 01:23:04.292 [251/268] Linking 
target lib/librte_reorder.so.24.1 01:23:04.292 [252/268] Linking target lib/librte_compressdev.so.24.1 01:23:04.292 [253/268] Linking target lib/librte_cryptodev.so.24.1 01:23:04.292 [254/268] Linking target lib/librte_net.so.24.1 01:23:04.550 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 01:23:04.550 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 01:23:04.550 [257/268] Linking target lib/librte_hash.so.24.1 01:23:04.550 [258/268] Linking target lib/librte_security.so.24.1 01:23:04.550 [259/268] Linking target lib/librte_cmdline.so.24.1 01:23:04.808 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 01:23:04.808 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 01:23:05.066 [262/268] Linking target lib/librte_ethdev.so.24.1 01:23:05.066 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 01:23:05.324 [264/268] Linking target lib/librte_power.so.24.1 01:23:07.854 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 01:23:07.854 [266/268] Linking static target lib/librte_vhost.a 01:23:09.230 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 01:23:09.230 [268/268] Linking target lib/librte_vhost.so.24.1 01:23:09.230 INFO: autodetecting backend as ninja 01:23:09.230 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 01:23:31.189 CC lib/ut_mock/mock.o 01:23:31.189 CC lib/ut/ut.o 01:23:31.189 CC lib/log/log.o 01:23:31.189 CC lib/log/log_deprecated.o 01:23:31.189 CC lib/log/log_flags.o 01:23:31.189 LIB libspdk_log.a 01:23:31.189 LIB libspdk_ut.a 01:23:31.189 LIB libspdk_ut_mock.a 01:23:31.189 SO libspdk_ut.so.2.0 01:23:31.189 SO libspdk_ut_mock.so.6.0 01:23:31.189 SO libspdk_log.so.7.1 01:23:31.189 SYMLINK libspdk_ut.so 01:23:31.189 SYMLINK libspdk_ut_mock.so 01:23:31.189 SYMLINK libspdk_log.so 01:23:31.189 CC lib/dma/dma.o 01:23:31.189 CC lib/ioat/ioat.o 01:23:31.189 CC lib/util/base64.o 01:23:31.189 CC lib/util/cpuset.o 01:23:31.189 CXX lib/trace_parser/trace.o 01:23:31.189 CC lib/util/bit_array.o 01:23:31.189 CC lib/util/crc16.o 01:23:31.189 CC lib/util/crc32.o 01:23:31.189 CC lib/util/crc32c.o 01:23:31.189 CC lib/vfio_user/host/vfio_user_pci.o 01:23:31.189 CC lib/util/crc32_ieee.o 01:23:31.189 CC lib/util/crc64.o 01:23:31.189 CC lib/util/dif.o 01:23:31.189 CC lib/vfio_user/host/vfio_user.o 01:23:31.189 LIB libspdk_dma.a 01:23:31.189 SO libspdk_dma.so.5.0 01:23:31.189 CC lib/util/fd.o 01:23:31.189 CC lib/util/fd_group.o 01:23:31.189 CC lib/util/file.o 01:23:31.189 CC lib/util/hexlify.o 01:23:31.189 SYMLINK libspdk_dma.so 01:23:31.189 CC lib/util/iov.o 01:23:31.189 LIB libspdk_ioat.a 01:23:31.189 SO libspdk_ioat.so.7.0 01:23:31.189 CC lib/util/math.o 01:23:31.189 CC lib/util/net.o 01:23:31.189 LIB libspdk_vfio_user.a 01:23:31.189 SYMLINK libspdk_ioat.so 01:23:31.189 CC lib/util/pipe.o 01:23:31.189 CC lib/util/strerror_tls.o 01:23:31.189 CC lib/util/string.o 01:23:31.189 SO libspdk_vfio_user.so.5.0 01:23:31.189 SYMLINK libspdk_vfio_user.so 01:23:31.189 CC lib/util/uuid.o 01:23:31.189 CC lib/util/xor.o 01:23:31.189 CC lib/util/zipf.o 01:23:31.189 CC lib/util/md5.o 01:23:31.189 LIB libspdk_util.a 01:23:31.189 SO libspdk_util.so.10.1 01:23:31.189 LIB libspdk_trace_parser.a 01:23:31.189 SO libspdk_trace_parser.so.6.0 01:23:31.189 SYMLINK libspdk_util.so 
01:23:31.189 SYMLINK libspdk_trace_parser.so 01:23:31.447 CC lib/conf/conf.o 01:23:31.447 CC lib/env_dpdk/env.o 01:23:31.447 CC lib/vmd/vmd.o 01:23:31.447 CC lib/rdma_utils/rdma_utils.o 01:23:31.447 CC lib/env_dpdk/memory.o 01:23:31.447 CC lib/json/json_parse.o 01:23:31.447 CC lib/env_dpdk/pci.o 01:23:31.447 CC lib/vmd/led.o 01:23:31.447 CC lib/json/json_util.o 01:23:31.447 CC lib/idxd/idxd.o 01:23:31.447 CC lib/idxd/idxd_user.o 01:23:31.704 LIB libspdk_conf.a 01:23:31.704 CC lib/env_dpdk/init.o 01:23:31.704 SO libspdk_conf.so.6.0 01:23:31.704 CC lib/json/json_write.o 01:23:31.704 LIB libspdk_rdma_utils.a 01:23:31.704 SO libspdk_rdma_utils.so.1.0 01:23:31.704 SYMLINK libspdk_conf.so 01:23:31.704 CC lib/env_dpdk/threads.o 01:23:31.704 SYMLINK libspdk_rdma_utils.so 01:23:31.704 CC lib/env_dpdk/pci_ioat.o 01:23:31.704 CC lib/env_dpdk/pci_virtio.o 01:23:31.704 CC lib/env_dpdk/pci_vmd.o 01:23:31.963 CC lib/env_dpdk/pci_idxd.o 01:23:31.963 CC lib/env_dpdk/pci_event.o 01:23:31.963 CC lib/idxd/idxd_kernel.o 01:23:31.963 CC lib/env_dpdk/sigbus_handler.o 01:23:31.963 LIB libspdk_json.a 01:23:31.963 SO libspdk_json.so.6.0 01:23:31.963 CC lib/env_dpdk/pci_dpdk.o 01:23:31.963 SYMLINK libspdk_json.so 01:23:31.963 CC lib/env_dpdk/pci_dpdk_2207.o 01:23:32.221 CC lib/env_dpdk/pci_dpdk_2211.o 01:23:32.221 LIB libspdk_idxd.a 01:23:32.221 SO libspdk_idxd.so.12.1 01:23:32.221 LIB libspdk_vmd.a 01:23:32.221 CC lib/rdma_provider/common.o 01:23:32.221 CC lib/rdma_provider/rdma_provider_verbs.o 01:23:32.221 SO libspdk_vmd.so.6.0 01:23:32.221 CC lib/jsonrpc/jsonrpc_server.o 01:23:32.221 SYMLINK libspdk_idxd.so 01:23:32.221 CC lib/jsonrpc/jsonrpc_server_tcp.o 01:23:32.221 CC lib/jsonrpc/jsonrpc_client.o 01:23:32.221 CC lib/jsonrpc/jsonrpc_client_tcp.o 01:23:32.221 SYMLINK libspdk_vmd.so 01:23:32.480 LIB libspdk_rdma_provider.a 01:23:32.480 SO libspdk_rdma_provider.so.7.0 01:23:32.480 SYMLINK libspdk_rdma_provider.so 01:23:32.480 LIB libspdk_jsonrpc.a 01:23:32.738 SO libspdk_jsonrpc.so.6.0 01:23:32.738 SYMLINK libspdk_jsonrpc.so 01:23:32.996 CC lib/rpc/rpc.o 01:23:33.255 LIB libspdk_env_dpdk.a 01:23:33.255 LIB libspdk_rpc.a 01:23:33.255 SO libspdk_rpc.so.6.0 01:23:33.255 SO libspdk_env_dpdk.so.15.1 01:23:33.255 SYMLINK libspdk_rpc.so 01:23:33.514 SYMLINK libspdk_env_dpdk.so 01:23:33.514 CC lib/trace/trace.o 01:23:33.514 CC lib/notify/notify.o 01:23:33.514 CC lib/trace/trace_rpc.o 01:23:33.514 CC lib/notify/notify_rpc.o 01:23:33.514 CC lib/trace/trace_flags.o 01:23:33.514 CC lib/keyring/keyring_rpc.o 01:23:33.514 CC lib/keyring/keyring.o 01:23:33.772 LIB libspdk_notify.a 01:23:33.772 SO libspdk_notify.so.6.0 01:23:33.772 LIB libspdk_keyring.a 01:23:33.772 SYMLINK libspdk_notify.so 01:23:33.772 SO libspdk_keyring.so.2.0 01:23:34.030 SYMLINK libspdk_keyring.so 01:23:34.030 LIB libspdk_trace.a 01:23:34.030 SO libspdk_trace.so.11.0 01:23:34.030 SYMLINK libspdk_trace.so 01:23:34.288 CC lib/thread/thread.o 01:23:34.288 CC lib/thread/iobuf.o 01:23:34.288 CC lib/sock/sock.o 01:23:34.288 CC lib/sock/sock_rpc.o 01:23:35.223 LIB libspdk_sock.a 01:23:35.223 SO libspdk_sock.so.10.0 01:23:35.223 SYMLINK libspdk_sock.so 01:23:35.481 CC lib/nvme/nvme_ctrlr_cmd.o 01:23:35.481 CC lib/nvme/nvme_ctrlr.o 01:23:35.481 CC lib/nvme/nvme_fabric.o 01:23:35.481 CC lib/nvme/nvme_pcie_common.o 01:23:35.481 CC lib/nvme/nvme_ns_cmd.o 01:23:35.481 CC lib/nvme/nvme_ns.o 01:23:35.481 CC lib/nvme/nvme.o 01:23:35.481 CC lib/nvme/nvme_pcie.o 01:23:35.481 CC lib/nvme/nvme_qpair.o 01:23:36.416 CC lib/nvme/nvme_quirks.o 01:23:36.416 CC 
lib/nvme/nvme_transport.o 01:23:36.416 CC lib/nvme/nvme_discovery.o 01:23:36.416 LIB libspdk_thread.a 01:23:36.416 SO libspdk_thread.so.11.0 01:23:36.416 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 01:23:36.673 CC lib/nvme/nvme_ns_ocssd_cmd.o 01:23:36.673 CC lib/nvme/nvme_tcp.o 01:23:36.673 SYMLINK libspdk_thread.so 01:23:36.673 CC lib/nvme/nvme_opal.o 01:23:36.673 CC lib/nvme/nvme_io_msg.o 01:23:36.932 CC lib/nvme/nvme_poll_group.o 01:23:36.932 CC lib/nvme/nvme_zns.o 01:23:37.190 CC lib/nvme/nvme_stubs.o 01:23:37.190 CC lib/nvme/nvme_auth.o 01:23:37.447 CC lib/accel/accel.o 01:23:37.447 CC lib/nvme/nvme_cuse.o 01:23:37.447 CC lib/blob/blobstore.o 01:23:37.447 CC lib/init/json_config.o 01:23:37.705 CC lib/blob/request.o 01:23:37.705 CC lib/blob/zeroes.o 01:23:37.705 CC lib/blob/blob_bs_dev.o 01:23:37.962 CC lib/init/subsystem.o 01:23:37.962 CC lib/init/subsystem_rpc.o 01:23:37.962 CC lib/virtio/virtio.o 01:23:37.962 CC lib/nvme/nvme_rdma.o 01:23:38.220 CC lib/init/rpc.o 01:23:38.220 CC lib/fsdev/fsdev.o 01:23:38.477 CC lib/virtio/virtio_vhost_user.o 01:23:38.477 LIB libspdk_init.a 01:23:38.477 SO libspdk_init.so.6.0 01:23:38.477 CC lib/accel/accel_rpc.o 01:23:38.477 CC lib/accel/accel_sw.o 01:23:38.477 SYMLINK libspdk_init.so 01:23:38.477 CC lib/virtio/virtio_vfio_user.o 01:23:38.734 CC lib/virtio/virtio_pci.o 01:23:38.734 CC lib/event/app.o 01:23:38.734 CC lib/event/reactor.o 01:23:38.734 CC lib/event/log_rpc.o 01:23:38.734 CC lib/fsdev/fsdev_io.o 01:23:38.734 CC lib/event/app_rpc.o 01:23:38.991 LIB libspdk_accel.a 01:23:38.991 CC lib/event/scheduler_static.o 01:23:38.991 SO libspdk_accel.so.16.0 01:23:38.991 CC lib/fsdev/fsdev_rpc.o 01:23:39.248 LIB libspdk_virtio.a 01:23:39.248 SYMLINK libspdk_accel.so 01:23:39.248 SO libspdk_virtio.so.7.0 01:23:39.248 SYMLINK libspdk_virtio.so 01:23:39.248 LIB libspdk_fsdev.a 01:23:39.248 CC lib/bdev/bdev.o 01:23:39.248 CC lib/bdev/bdev_rpc.o 01:23:39.248 CC lib/bdev/bdev_zone.o 01:23:39.248 SO libspdk_fsdev.so.2.0 01:23:39.248 CC lib/bdev/part.o 01:23:39.248 CC lib/bdev/scsi_nvme.o 01:23:39.248 LIB libspdk_event.a 01:23:39.505 SYMLINK libspdk_fsdev.so 01:23:39.505 SO libspdk_event.so.14.0 01:23:39.505 SYMLINK libspdk_event.so 01:23:39.505 CC lib/fuse_dispatcher/fuse_dispatcher.o 01:23:39.763 LIB libspdk_nvme.a 01:23:40.021 SO libspdk_nvme.so.15.0 01:23:40.277 SYMLINK libspdk_nvme.so 01:23:40.535 LIB libspdk_fuse_dispatcher.a 01:23:40.535 SO libspdk_fuse_dispatcher.so.1.0 01:23:40.535 SYMLINK libspdk_fuse_dispatcher.so 01:23:41.907 LIB libspdk_blob.a 01:23:42.165 SO libspdk_blob.so.12.0 01:23:42.165 SYMLINK libspdk_blob.so 01:23:42.423 CC lib/blobfs/tree.o 01:23:42.423 CC lib/blobfs/blobfs.o 01:23:42.423 CC lib/lvol/lvol.o 01:23:43.354 LIB libspdk_bdev.a 01:23:43.354 SO libspdk_bdev.so.17.0 01:23:43.354 SYMLINK libspdk_bdev.so 01:23:43.611 LIB libspdk_blobfs.a 01:23:43.611 CC lib/nbd/nbd.o 01:23:43.611 CC lib/ublk/ublk_rpc.o 01:23:43.611 CC lib/ublk/ublk.o 01:23:43.611 CC lib/nbd/nbd_rpc.o 01:23:43.611 CC lib/ftl/ftl_core.o 01:23:43.611 CC lib/ftl/ftl_init.o 01:23:43.611 CC lib/scsi/dev.o 01:23:43.611 CC lib/nvmf/ctrlr.o 01:23:43.611 SO libspdk_blobfs.so.11.0 01:23:43.611 LIB libspdk_lvol.a 01:23:43.868 SO libspdk_lvol.so.11.0 01:23:43.868 SYMLINK libspdk_blobfs.so 01:23:43.868 CC lib/scsi/lun.o 01:23:43.868 SYMLINK libspdk_lvol.so 01:23:43.868 CC lib/scsi/port.o 01:23:43.868 CC lib/ftl/ftl_layout.o 01:23:43.868 CC lib/ftl/ftl_debug.o 01:23:43.868 CC lib/scsi/scsi.o 01:23:44.126 CC lib/ftl/ftl_io.o 01:23:44.126 CC lib/ftl/ftl_sb.o 01:23:44.126 CC 
lib/scsi/scsi_bdev.o 01:23:44.126 CC lib/ftl/ftl_l2p.o 01:23:44.126 CC lib/ftl/ftl_l2p_flat.o 01:23:44.126 CC lib/scsi/scsi_pr.o 01:23:44.417 LIB libspdk_nbd.a 01:23:44.417 CC lib/ftl/ftl_nv_cache.o 01:23:44.417 CC lib/ftl/ftl_band.o 01:23:44.417 SO libspdk_nbd.so.7.0 01:23:44.417 CC lib/nvmf/ctrlr_discovery.o 01:23:44.417 SYMLINK libspdk_nbd.so 01:23:44.417 CC lib/nvmf/ctrlr_bdev.o 01:23:44.417 CC lib/scsi/scsi_rpc.o 01:23:44.417 CC lib/nvmf/subsystem.o 01:23:44.674 LIB libspdk_ublk.a 01:23:44.674 CC lib/nvmf/nvmf.o 01:23:44.674 SO libspdk_ublk.so.3.0 01:23:44.674 CC lib/scsi/task.o 01:23:44.674 SYMLINK libspdk_ublk.so 01:23:44.674 CC lib/nvmf/nvmf_rpc.o 01:23:44.932 CC lib/nvmf/transport.o 01:23:44.932 CC lib/nvmf/tcp.o 01:23:44.932 LIB libspdk_scsi.a 01:23:44.932 CC lib/nvmf/stubs.o 01:23:44.932 SO libspdk_scsi.so.9.0 01:23:45.189 SYMLINK libspdk_scsi.so 01:23:45.189 CC lib/nvmf/mdns_server.o 01:23:45.447 CC lib/nvmf/rdma.o 01:23:45.447 CC lib/nvmf/auth.o 01:23:45.705 CC lib/ftl/ftl_band_ops.o 01:23:45.705 CC lib/ftl/ftl_writer.o 01:23:45.963 CC lib/ftl/ftl_rq.o 01:23:45.963 CC lib/iscsi/conn.o 01:23:45.963 CC lib/vhost/vhost.o 01:23:45.963 CC lib/vhost/vhost_rpc.o 01:23:45.963 CC lib/vhost/vhost_scsi.o 01:23:45.963 CC lib/ftl/ftl_reloc.o 01:23:46.222 CC lib/iscsi/init_grp.o 01:23:46.222 CC lib/iscsi/iscsi.o 01:23:46.480 CC lib/iscsi/param.o 01:23:46.480 CC lib/ftl/ftl_l2p_cache.o 01:23:46.738 CC lib/ftl/ftl_p2l.o 01:23:46.738 CC lib/iscsi/portal_grp.o 01:23:46.738 CC lib/vhost/vhost_blk.o 01:23:46.996 CC lib/iscsi/tgt_node.o 01:23:47.254 CC lib/ftl/ftl_p2l_log.o 01:23:47.254 CC lib/ftl/mngt/ftl_mngt.o 01:23:47.254 CC lib/ftl/mngt/ftl_mngt_bdev.o 01:23:47.254 CC lib/ftl/mngt/ftl_mngt_shutdown.o 01:23:47.254 CC lib/vhost/rte_vhost_user.o 01:23:47.254 CC lib/iscsi/iscsi_subsystem.o 01:23:47.512 CC lib/iscsi/iscsi_rpc.o 01:23:47.512 CC lib/iscsi/task.o 01:23:47.512 CC lib/ftl/mngt/ftl_mngt_startup.o 01:23:47.512 CC lib/ftl/mngt/ftl_mngt_md.o 01:23:47.512 CC lib/ftl/mngt/ftl_mngt_misc.o 01:23:47.770 CC lib/ftl/mngt/ftl_mngt_ioch.o 01:23:47.770 CC lib/ftl/mngt/ftl_mngt_l2p.o 01:23:47.770 CC lib/ftl/mngt/ftl_mngt_band.o 01:23:47.770 CC lib/ftl/mngt/ftl_mngt_self_test.o 01:23:47.770 CC lib/ftl/mngt/ftl_mngt_p2l.o 01:23:48.029 CC lib/ftl/mngt/ftl_mngt_recovery.o 01:23:48.029 CC lib/ftl/mngt/ftl_mngt_upgrade.o 01:23:48.029 CC lib/ftl/utils/ftl_conf.o 01:23:48.029 CC lib/ftl/utils/ftl_md.o 01:23:48.029 LIB libspdk_iscsi.a 01:23:48.029 CC lib/ftl/utils/ftl_mempool.o 01:23:48.029 CC lib/ftl/utils/ftl_bitmap.o 01:23:48.029 SO libspdk_iscsi.so.8.0 01:23:48.288 CC lib/ftl/utils/ftl_property.o 01:23:48.288 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 01:23:48.288 CC lib/ftl/upgrade/ftl_layout_upgrade.o 01:23:48.288 SYMLINK libspdk_iscsi.so 01:23:48.288 CC lib/ftl/upgrade/ftl_sb_upgrade.o 01:23:48.288 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 01:23:48.288 CC lib/ftl/upgrade/ftl_band_upgrade.o 01:23:48.288 LIB libspdk_nvmf.a 01:23:48.548 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 01:23:48.548 CC lib/ftl/upgrade/ftl_trim_upgrade.o 01:23:48.548 CC lib/ftl/upgrade/ftl_sb_v3.o 01:23:48.548 LIB libspdk_vhost.a 01:23:48.548 CC lib/ftl/upgrade/ftl_sb_v5.o 01:23:48.548 SO libspdk_nvmf.so.20.0 01:23:48.548 SO libspdk_vhost.so.8.0 01:23:48.548 CC lib/ftl/nvc/ftl_nvc_dev.o 01:23:48.548 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 01:23:48.548 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 01:23:48.548 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 01:23:48.548 CC lib/ftl/base/ftl_base_dev.o 01:23:48.807 CC lib/ftl/base/ftl_base_bdev.o 01:23:48.807 SYMLINK 
libspdk_vhost.so 01:23:48.807 CC lib/ftl/ftl_trace.o 01:23:48.807 SYMLINK libspdk_nvmf.so 01:23:49.066 LIB libspdk_ftl.a 01:23:49.325 SO libspdk_ftl.so.9.0 01:23:49.607 SYMLINK libspdk_ftl.so 01:23:49.867 CC module/env_dpdk/env_dpdk_rpc.o 01:23:50.126 CC module/accel/iaa/accel_iaa.o 01:23:50.126 CC module/accel/error/accel_error.o 01:23:50.126 CC module/accel/dsa/accel_dsa.o 01:23:50.126 CC module/sock/posix/posix.o 01:23:50.126 CC module/accel/ioat/accel_ioat.o 01:23:50.126 CC module/fsdev/aio/fsdev_aio.o 01:23:50.126 CC module/blob/bdev/blob_bdev.o 01:23:50.126 CC module/keyring/file/keyring.o 01:23:50.126 CC module/scheduler/dynamic/scheduler_dynamic.o 01:23:50.126 LIB libspdk_env_dpdk_rpc.a 01:23:50.126 SO libspdk_env_dpdk_rpc.so.6.0 01:23:50.126 SYMLINK libspdk_env_dpdk_rpc.so 01:23:50.126 CC module/accel/dsa/accel_dsa_rpc.o 01:23:50.126 CC module/keyring/file/keyring_rpc.o 01:23:50.126 CC module/accel/ioat/accel_ioat_rpc.o 01:23:50.385 CC module/accel/iaa/accel_iaa_rpc.o 01:23:50.385 CC module/accel/error/accel_error_rpc.o 01:23:50.385 LIB libspdk_scheduler_dynamic.a 01:23:50.385 SO libspdk_scheduler_dynamic.so.4.0 01:23:50.385 LIB libspdk_blob_bdev.a 01:23:50.385 LIB libspdk_keyring_file.a 01:23:50.385 SO libspdk_blob_bdev.so.12.0 01:23:50.385 LIB libspdk_accel_ioat.a 01:23:50.385 SO libspdk_keyring_file.so.2.0 01:23:50.385 LIB libspdk_accel_dsa.a 01:23:50.385 SYMLINK libspdk_scheduler_dynamic.so 01:23:50.385 LIB libspdk_accel_iaa.a 01:23:50.385 SO libspdk_accel_ioat.so.6.0 01:23:50.385 SO libspdk_accel_dsa.so.5.0 01:23:50.385 LIB libspdk_accel_error.a 01:23:50.385 SO libspdk_accel_iaa.so.3.0 01:23:50.385 SYMLINK libspdk_keyring_file.so 01:23:50.385 SYMLINK libspdk_blob_bdev.so 01:23:50.385 SO libspdk_accel_error.so.2.0 01:23:50.643 SYMLINK libspdk_accel_ioat.so 01:23:50.643 SYMLINK libspdk_accel_dsa.so 01:23:50.643 CC module/fsdev/aio/fsdev_aio_rpc.o 01:23:50.643 CC module/scheduler/dpdk_governor/dpdk_governor.o 01:23:50.643 SYMLINK libspdk_accel_iaa.so 01:23:50.643 CC module/fsdev/aio/linux_aio_mgr.o 01:23:50.643 SYMLINK libspdk_accel_error.so 01:23:50.643 CC module/scheduler/gscheduler/gscheduler.o 01:23:50.643 CC module/keyring/linux/keyring.o 01:23:50.643 LIB libspdk_scheduler_dpdk_governor.a 01:23:50.643 SO libspdk_scheduler_dpdk_governor.so.4.0 01:23:50.902 CC module/bdev/error/vbdev_error.o 01:23:50.902 CC module/bdev/delay/vbdev_delay.o 01:23:50.902 CC module/blobfs/bdev/blobfs_bdev.o 01:23:50.902 CC module/keyring/linux/keyring_rpc.o 01:23:50.902 LIB libspdk_scheduler_gscheduler.a 01:23:50.902 SYMLINK libspdk_scheduler_dpdk_governor.so 01:23:50.902 CC module/blobfs/bdev/blobfs_bdev_rpc.o 01:23:50.902 SO libspdk_scheduler_gscheduler.so.4.0 01:23:50.902 CC module/bdev/gpt/gpt.o 01:23:50.902 LIB libspdk_fsdev_aio.a 01:23:50.902 SYMLINK libspdk_scheduler_gscheduler.so 01:23:50.902 SO libspdk_fsdev_aio.so.1.0 01:23:50.902 CC module/bdev/delay/vbdev_delay_rpc.o 01:23:50.902 CC module/bdev/lvol/vbdev_lvol.o 01:23:50.902 LIB libspdk_keyring_linux.a 01:23:50.902 LIB libspdk_sock_posix.a 01:23:50.902 CC module/bdev/error/vbdev_error_rpc.o 01:23:50.902 SYMLINK libspdk_fsdev_aio.so 01:23:51.160 SO libspdk_keyring_linux.so.1.0 01:23:51.160 CC module/bdev/lvol/vbdev_lvol_rpc.o 01:23:51.160 SO libspdk_sock_posix.so.6.0 01:23:51.160 LIB libspdk_blobfs_bdev.a 01:23:51.160 SYMLINK libspdk_keyring_linux.so 01:23:51.160 SO libspdk_blobfs_bdev.so.6.0 01:23:51.160 CC module/bdev/gpt/vbdev_gpt.o 01:23:51.160 SYMLINK libspdk_sock_posix.so 01:23:51.160 SYMLINK libspdk_blobfs_bdev.so 01:23:51.160 
LIB libspdk_bdev_error.a 01:23:51.160 SO libspdk_bdev_error.so.6.0 01:23:51.160 LIB libspdk_bdev_delay.a 01:23:51.418 CC module/bdev/malloc/bdev_malloc.o 01:23:51.418 SO libspdk_bdev_delay.so.6.0 01:23:51.418 CC module/bdev/null/bdev_null.o 01:23:51.418 SYMLINK libspdk_bdev_error.so 01:23:51.418 CC module/bdev/passthru/vbdev_passthru.o 01:23:51.418 CC module/bdev/nvme/bdev_nvme.o 01:23:51.418 CC module/bdev/raid/bdev_raid.o 01:23:51.418 SYMLINK libspdk_bdev_delay.so 01:23:51.418 CC module/bdev/nvme/bdev_nvme_rpc.o 01:23:51.418 LIB libspdk_bdev_gpt.a 01:23:51.418 SO libspdk_bdev_gpt.so.6.0 01:23:51.418 CC module/bdev/split/vbdev_split.o 01:23:51.677 SYMLINK libspdk_bdev_gpt.so 01:23:51.677 CC module/bdev/malloc/bdev_malloc_rpc.o 01:23:51.677 LIB libspdk_bdev_lvol.a 01:23:51.677 SO libspdk_bdev_lvol.so.6.0 01:23:51.677 CC module/bdev/null/bdev_null_rpc.o 01:23:51.677 CC module/bdev/passthru/vbdev_passthru_rpc.o 01:23:51.677 SYMLINK libspdk_bdev_lvol.so 01:23:51.677 CC module/bdev/nvme/nvme_rpc.o 01:23:51.677 CC module/bdev/zone_block/vbdev_zone_block.o 01:23:51.677 CC module/bdev/split/vbdev_split_rpc.o 01:23:51.677 CC module/bdev/nvme/bdev_mdns_client.o 01:23:51.935 LIB libspdk_bdev_malloc.a 01:23:51.935 SO libspdk_bdev_malloc.so.6.0 01:23:51.935 LIB libspdk_bdev_null.a 01:23:51.935 LIB libspdk_bdev_passthru.a 01:23:51.935 SO libspdk_bdev_null.so.6.0 01:23:51.935 SYMLINK libspdk_bdev_malloc.so 01:23:51.935 SO libspdk_bdev_passthru.so.6.0 01:23:51.935 CC module/bdev/nvme/vbdev_opal.o 01:23:51.935 SYMLINK libspdk_bdev_null.so 01:23:51.935 CC module/bdev/nvme/vbdev_opal_rpc.o 01:23:51.935 LIB libspdk_bdev_split.a 01:23:51.935 SYMLINK libspdk_bdev_passthru.so 01:23:51.935 SO libspdk_bdev_split.so.6.0 01:23:51.936 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 01:23:52.195 SYMLINK libspdk_bdev_split.so 01:23:52.195 CC module/bdev/xnvme/bdev_xnvme.o 01:23:52.195 CC module/bdev/aio/bdev_aio.o 01:23:52.195 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 01:23:52.195 CC module/bdev/xnvme/bdev_xnvme_rpc.o 01:23:52.195 CC module/bdev/aio/bdev_aio_rpc.o 01:23:52.195 CC module/bdev/ftl/bdev_ftl.o 01:23:52.454 LIB libspdk_bdev_zone_block.a 01:23:52.454 CC module/bdev/iscsi/bdev_iscsi.o 01:23:52.454 SO libspdk_bdev_zone_block.so.6.0 01:23:52.454 CC module/bdev/virtio/bdev_virtio_scsi.o 01:23:52.454 CC module/bdev/iscsi/bdev_iscsi_rpc.o 01:23:52.454 CC module/bdev/virtio/bdev_virtio_blk.o 01:23:52.454 SYMLINK libspdk_bdev_zone_block.so 01:23:52.454 CC module/bdev/raid/bdev_raid_rpc.o 01:23:52.454 LIB libspdk_bdev_xnvme.a 01:23:52.454 SO libspdk_bdev_xnvme.so.3.0 01:23:52.713 LIB libspdk_bdev_aio.a 01:23:52.713 SYMLINK libspdk_bdev_xnvme.so 01:23:52.713 CC module/bdev/ftl/bdev_ftl_rpc.o 01:23:52.713 SO libspdk_bdev_aio.so.6.0 01:23:52.713 CC module/bdev/virtio/bdev_virtio_rpc.o 01:23:52.713 CC module/bdev/raid/bdev_raid_sb.o 01:23:52.713 SYMLINK libspdk_bdev_aio.so 01:23:52.713 CC module/bdev/raid/raid0.o 01:23:52.713 CC module/bdev/raid/raid1.o 01:23:52.713 CC module/bdev/raid/concat.o 01:23:52.972 LIB libspdk_bdev_ftl.a 01:23:52.972 LIB libspdk_bdev_iscsi.a 01:23:52.972 SO libspdk_bdev_ftl.so.6.0 01:23:52.972 SO libspdk_bdev_iscsi.so.6.0 01:23:52.972 SYMLINK libspdk_bdev_ftl.so 01:23:52.972 SYMLINK libspdk_bdev_iscsi.so 01:23:52.972 LIB libspdk_bdev_raid.a 01:23:52.972 LIB libspdk_bdev_virtio.a 01:23:53.229 SO libspdk_bdev_raid.so.6.0 01:23:53.229 SO libspdk_bdev_virtio.so.6.0 01:23:53.229 SYMLINK libspdk_bdev_virtio.so 01:23:53.229 SYMLINK libspdk_bdev_raid.so 01:23:55.136 LIB libspdk_bdev_nvme.a 01:23:55.136 
SO libspdk_bdev_nvme.so.7.1 01:23:55.393 SYMLINK libspdk_bdev_nvme.so 01:23:55.650 CC module/event/subsystems/keyring/keyring.o 01:23:55.650 CC module/event/subsystems/scheduler/scheduler.o 01:23:55.650 CC module/event/subsystems/sock/sock.o 01:23:55.907 CC module/event/subsystems/fsdev/fsdev.o 01:23:55.907 CC module/event/subsystems/iobuf/iobuf.o 01:23:55.907 CC module/event/subsystems/vhost_blk/vhost_blk.o 01:23:55.907 CC module/event/subsystems/iobuf/iobuf_rpc.o 01:23:55.907 CC module/event/subsystems/vmd/vmd.o 01:23:55.907 CC module/event/subsystems/vmd/vmd_rpc.o 01:23:55.907 LIB libspdk_event_keyring.a 01:23:55.907 LIB libspdk_event_vhost_blk.a 01:23:55.907 LIB libspdk_event_iobuf.a 01:23:55.907 LIB libspdk_event_fsdev.a 01:23:55.907 SO libspdk_event_vhost_blk.so.3.0 01:23:55.907 SO libspdk_event_keyring.so.1.0 01:23:55.907 LIB libspdk_event_sock.a 01:23:55.907 LIB libspdk_event_vmd.a 01:23:55.907 SO libspdk_event_fsdev.so.1.0 01:23:55.907 SO libspdk_event_sock.so.5.0 01:23:55.907 SO libspdk_event_iobuf.so.3.0 01:23:55.907 LIB libspdk_event_scheduler.a 01:23:56.164 SYMLINK libspdk_event_vhost_blk.so 01:23:56.164 SO libspdk_event_vmd.so.6.0 01:23:56.164 SYMLINK libspdk_event_keyring.so 01:23:56.164 SO libspdk_event_scheduler.so.4.0 01:23:56.164 SYMLINK libspdk_event_fsdev.so 01:23:56.164 SYMLINK libspdk_event_iobuf.so 01:23:56.164 SYMLINK libspdk_event_sock.so 01:23:56.164 SYMLINK libspdk_event_vmd.so 01:23:56.164 SYMLINK libspdk_event_scheduler.so 01:23:56.423 CC module/event/subsystems/accel/accel.o 01:23:56.423 LIB libspdk_event_accel.a 01:23:56.680 SO libspdk_event_accel.so.6.0 01:23:56.680 SYMLINK libspdk_event_accel.so 01:23:56.938 CC module/event/subsystems/bdev/bdev.o 01:23:57.195 LIB libspdk_event_bdev.a 01:23:57.195 SO libspdk_event_bdev.so.6.0 01:23:57.195 SYMLINK libspdk_event_bdev.so 01:23:57.453 CC module/event/subsystems/nbd/nbd.o 01:23:57.711 CC module/event/subsystems/scsi/scsi.o 01:23:57.711 CC module/event/subsystems/nvmf/nvmf_rpc.o 01:23:57.711 CC module/event/subsystems/nvmf/nvmf_tgt.o 01:23:57.711 CC module/event/subsystems/ublk/ublk.o 01:23:57.711 LIB libspdk_event_nbd.a 01:23:57.711 LIB libspdk_event_ublk.a 01:23:57.969 SO libspdk_event_nbd.so.6.0 01:23:57.969 SO libspdk_event_ublk.so.3.0 01:23:57.969 LIB libspdk_event_scsi.a 01:23:57.969 SYMLINK libspdk_event_ublk.so 01:23:57.969 SYMLINK libspdk_event_nbd.so 01:23:57.969 SO libspdk_event_scsi.so.6.0 01:23:57.969 LIB libspdk_event_nvmf.a 01:23:57.969 SYMLINK libspdk_event_scsi.so 01:23:57.969 SO libspdk_event_nvmf.so.6.0 01:23:57.969 SYMLINK libspdk_event_nvmf.so 01:23:58.226 CC module/event/subsystems/iscsi/iscsi.o 01:23:58.226 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 01:23:58.484 LIB libspdk_event_vhost_scsi.a 01:23:58.484 LIB libspdk_event_iscsi.a 01:23:58.484 SO libspdk_event_vhost_scsi.so.3.0 01:23:58.484 SO libspdk_event_iscsi.so.6.0 01:23:58.484 SYMLINK libspdk_event_vhost_scsi.so 01:23:58.484 SYMLINK libspdk_event_iscsi.so 01:23:58.745 SO libspdk.so.6.0 01:23:58.745 SYMLINK libspdk.so 01:23:59.007 CXX app/trace/trace.o 01:23:59.007 CC app/trace_record/trace_record.o 01:23:59.007 CC app/spdk_lspci/spdk_lspci.o 01:23:59.007 CC examples/interrupt_tgt/interrupt_tgt.o 01:23:59.007 CC app/iscsi_tgt/iscsi_tgt.o 01:23:59.007 CC app/nvmf_tgt/nvmf_main.o 01:23:59.007 CC examples/util/zipf/zipf.o 01:23:59.007 CC test/thread/poller_perf/poller_perf.o 01:23:59.007 CC app/spdk_tgt/spdk_tgt.o 01:23:59.007 CC examples/ioat/perf/perf.o 01:23:59.007 LINK spdk_lspci 01:23:59.266 LINK nvmf_tgt 01:23:59.266 LINK 
interrupt_tgt 01:23:59.266 LINK poller_perf 01:23:59.266 LINK zipf 01:23:59.266 LINK iscsi_tgt 01:23:59.266 LINK spdk_tgt 01:23:59.266 LINK spdk_trace_record 01:23:59.524 LINK ioat_perf 01:23:59.524 LINK spdk_trace 01:23:59.524 CC app/spdk_nvme_perf/perf.o 01:23:59.524 CC app/spdk_nvme_identify/identify.o 01:23:59.524 CC app/spdk_nvme_discover/discovery_aer.o 01:23:59.781 CC app/spdk_top/spdk_top.o 01:23:59.781 CC test/dma/test_dma/test_dma.o 01:23:59.781 CC app/spdk_dd/spdk_dd.o 01:23:59.781 CC examples/ioat/verify/verify.o 01:23:59.781 CC test/app/bdev_svc/bdev_svc.o 01:23:59.781 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 01:23:59.781 LINK spdk_nvme_discover 01:23:59.781 CC app/fio/nvme/fio_plugin.o 01:24:00.038 LINK bdev_svc 01:24:00.038 LINK verify 01:24:00.039 LINK spdk_dd 01:24:00.296 CC app/vhost/vhost.o 01:24:00.296 TEST_HEADER include/spdk/accel.h 01:24:00.296 TEST_HEADER include/spdk/accel_module.h 01:24:00.296 TEST_HEADER include/spdk/assert.h 01:24:00.296 TEST_HEADER include/spdk/barrier.h 01:24:00.296 TEST_HEADER include/spdk/base64.h 01:24:00.296 TEST_HEADER include/spdk/bdev.h 01:24:00.296 TEST_HEADER include/spdk/bdev_module.h 01:24:00.296 TEST_HEADER include/spdk/bdev_zone.h 01:24:00.296 TEST_HEADER include/spdk/bit_array.h 01:24:00.296 TEST_HEADER include/spdk/bit_pool.h 01:24:00.296 TEST_HEADER include/spdk/blob_bdev.h 01:24:00.296 TEST_HEADER include/spdk/blobfs_bdev.h 01:24:00.296 TEST_HEADER include/spdk/blobfs.h 01:24:00.296 TEST_HEADER include/spdk/blob.h 01:24:00.296 TEST_HEADER include/spdk/conf.h 01:24:00.296 TEST_HEADER include/spdk/config.h 01:24:00.296 TEST_HEADER include/spdk/cpuset.h 01:24:00.296 TEST_HEADER include/spdk/crc16.h 01:24:00.296 TEST_HEADER include/spdk/crc32.h 01:24:00.296 TEST_HEADER include/spdk/crc64.h 01:24:00.296 TEST_HEADER include/spdk/dif.h 01:24:00.296 TEST_HEADER include/spdk/dma.h 01:24:00.296 TEST_HEADER include/spdk/endian.h 01:24:00.296 TEST_HEADER include/spdk/env_dpdk.h 01:24:00.296 TEST_HEADER include/spdk/env.h 01:24:00.296 TEST_HEADER include/spdk/event.h 01:24:00.296 TEST_HEADER include/spdk/fd_group.h 01:24:00.296 TEST_HEADER include/spdk/fd.h 01:24:00.296 TEST_HEADER include/spdk/file.h 01:24:00.296 TEST_HEADER include/spdk/fsdev.h 01:24:00.296 TEST_HEADER include/spdk/fsdev_module.h 01:24:00.296 TEST_HEADER include/spdk/ftl.h 01:24:00.297 TEST_HEADER include/spdk/fuse_dispatcher.h 01:24:00.297 LINK test_dma 01:24:00.297 TEST_HEADER include/spdk/gpt_spec.h 01:24:00.297 TEST_HEADER include/spdk/hexlify.h 01:24:00.297 TEST_HEADER include/spdk/histogram_data.h 01:24:00.297 LINK nvme_fuzz 01:24:00.297 TEST_HEADER include/spdk/idxd.h 01:24:00.297 TEST_HEADER include/spdk/idxd_spec.h 01:24:00.297 TEST_HEADER include/spdk/init.h 01:24:00.297 TEST_HEADER include/spdk/ioat.h 01:24:00.297 TEST_HEADER include/spdk/ioat_spec.h 01:24:00.297 TEST_HEADER include/spdk/iscsi_spec.h 01:24:00.297 TEST_HEADER include/spdk/json.h 01:24:00.297 TEST_HEADER include/spdk/jsonrpc.h 01:24:00.297 TEST_HEADER include/spdk/keyring.h 01:24:00.297 TEST_HEADER include/spdk/keyring_module.h 01:24:00.297 TEST_HEADER include/spdk/likely.h 01:24:00.297 TEST_HEADER include/spdk/log.h 01:24:00.297 TEST_HEADER include/spdk/lvol.h 01:24:00.297 TEST_HEADER include/spdk/md5.h 01:24:00.297 TEST_HEADER include/spdk/memory.h 01:24:00.297 TEST_HEADER include/spdk/mmio.h 01:24:00.297 TEST_HEADER include/spdk/nbd.h 01:24:00.297 TEST_HEADER include/spdk/net.h 01:24:00.297 TEST_HEADER include/spdk/notify.h 01:24:00.297 TEST_HEADER include/spdk/nvme.h 01:24:00.297 TEST_HEADER 
include/spdk/nvme_intel.h 01:24:00.297 LINK vhost 01:24:00.297 TEST_HEADER include/spdk/nvme_ocssd.h 01:24:00.555 TEST_HEADER include/spdk/nvme_ocssd_spec.h 01:24:00.555 TEST_HEADER include/spdk/nvme_spec.h 01:24:00.555 TEST_HEADER include/spdk/nvme_zns.h 01:24:00.555 TEST_HEADER include/spdk/nvmf_cmd.h 01:24:00.555 TEST_HEADER include/spdk/nvmf_fc_spec.h 01:24:00.555 TEST_HEADER include/spdk/nvmf.h 01:24:00.555 TEST_HEADER include/spdk/nvmf_spec.h 01:24:00.555 TEST_HEADER include/spdk/nvmf_transport.h 01:24:00.555 TEST_HEADER include/spdk/opal.h 01:24:00.555 TEST_HEADER include/spdk/opal_spec.h 01:24:00.555 TEST_HEADER include/spdk/pci_ids.h 01:24:00.555 TEST_HEADER include/spdk/pipe.h 01:24:00.555 TEST_HEADER include/spdk/queue.h 01:24:00.555 CC examples/thread/thread/thread_ex.o 01:24:00.555 TEST_HEADER include/spdk/reduce.h 01:24:00.555 TEST_HEADER include/spdk/rpc.h 01:24:00.555 TEST_HEADER include/spdk/scheduler.h 01:24:00.555 TEST_HEADER include/spdk/scsi.h 01:24:00.555 TEST_HEADER include/spdk/scsi_spec.h 01:24:00.555 TEST_HEADER include/spdk/sock.h 01:24:00.555 TEST_HEADER include/spdk/stdinc.h 01:24:00.555 TEST_HEADER include/spdk/string.h 01:24:00.555 TEST_HEADER include/spdk/thread.h 01:24:00.555 TEST_HEADER include/spdk/trace.h 01:24:00.555 TEST_HEADER include/spdk/trace_parser.h 01:24:00.555 TEST_HEADER include/spdk/tree.h 01:24:00.555 TEST_HEADER include/spdk/ublk.h 01:24:00.555 TEST_HEADER include/spdk/util.h 01:24:00.555 TEST_HEADER include/spdk/uuid.h 01:24:00.555 TEST_HEADER include/spdk/version.h 01:24:00.555 TEST_HEADER include/spdk/vfio_user_pci.h 01:24:00.555 TEST_HEADER include/spdk/vfio_user_spec.h 01:24:00.555 TEST_HEADER include/spdk/vhost.h 01:24:00.555 TEST_HEADER include/spdk/vmd.h 01:24:00.555 TEST_HEADER include/spdk/xor.h 01:24:00.555 TEST_HEADER include/spdk/zipf.h 01:24:00.555 CXX test/cpp_headers/accel.o 01:24:00.555 LINK spdk_nvme 01:24:00.555 LINK spdk_nvme_perf 01:24:00.555 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 01:24:00.813 CXX test/cpp_headers/accel_module.o 01:24:00.813 CC test/env/mem_callbacks/mem_callbacks.o 01:24:00.813 LINK spdk_nvme_identify 01:24:00.813 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:24:00.813 LINK thread 01:24:00.813 LINK spdk_top 01:24:00.813 CC examples/sock/hello_world/hello_sock.o 01:24:00.813 CC app/fio/bdev/fio_plugin.o 01:24:00.813 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:24:00.813 CXX test/cpp_headers/assert.o 01:24:01.071 CC test/event/event_perf/event_perf.o 01:24:01.071 CC test/env/vtophys/vtophys.o 01:24:01.071 CXX test/cpp_headers/barrier.o 01:24:01.071 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:24:01.071 CC test/env/memory/memory_ut.o 01:24:01.329 LINK event_perf 01:24:01.329 LINK vtophys 01:24:01.329 LINK hello_sock 01:24:01.329 CXX test/cpp_headers/base64.o 01:24:01.329 LINK mem_callbacks 01:24:01.329 LINK env_dpdk_post_init 01:24:01.586 CXX test/cpp_headers/bdev.o 01:24:01.586 CC test/event/reactor/reactor.o 01:24:01.586 CC test/env/pci/pci_ut.o 01:24:01.586 LINK vhost_fuzz 01:24:01.586 LINK spdk_bdev 01:24:01.586 CC examples/vmd/lsvmd/lsvmd.o 01:24:01.586 LINK reactor 01:24:01.586 CC test/rpc_client/rpc_client_test.o 01:24:01.843 CC test/nvme/aer/aer.o 01:24:01.843 CXX test/cpp_headers/bdev_module.o 01:24:01.843 CC test/nvme/reset/reset.o 01:24:01.843 LINK lsvmd 01:24:01.843 CC test/nvme/sgl/sgl.o 01:24:01.843 LINK rpc_client_test 01:24:01.843 CC test/event/reactor_perf/reactor_perf.o 01:24:01.843 CXX test/cpp_headers/bdev_zone.o 01:24:02.100 LINK pci_ut 01:24:02.100 CXX 
test/cpp_headers/bit_array.o 01:24:02.100 LINK reactor_perf 01:24:02.100 CC examples/vmd/led/led.o 01:24:02.100 LINK aer 01:24:02.100 LINK reset 01:24:02.100 LINK sgl 01:24:02.100 CXX test/cpp_headers/bit_pool.o 01:24:02.357 CXX test/cpp_headers/blob_bdev.o 01:24:02.357 LINK led 01:24:02.357 CC test/event/app_repeat/app_repeat.o 01:24:02.357 CXX test/cpp_headers/blobfs_bdev.o 01:24:02.357 CXX test/cpp_headers/blobfs.o 01:24:02.357 CXX test/cpp_headers/blob.o 01:24:02.357 CC test/nvme/e2edp/nvme_dp.o 01:24:02.357 LINK app_repeat 01:24:02.614 CC test/app/histogram_perf/histogram_perf.o 01:24:02.614 CXX test/cpp_headers/conf.o 01:24:02.614 CC test/app/jsoncat/jsoncat.o 01:24:02.614 CC test/app/stub/stub.o 01:24:02.614 LINK memory_ut 01:24:02.614 CC examples/idxd/perf/perf.o 01:24:02.614 CXX test/cpp_headers/config.o 01:24:02.614 LINK jsoncat 01:24:02.614 LINK histogram_perf 01:24:02.614 CC examples/fsdev/hello_world/hello_fsdev.o 01:24:02.614 CXX test/cpp_headers/cpuset.o 01:24:02.871 LINK nvme_dp 01:24:02.871 CC test/event/scheduler/scheduler.o 01:24:02.871 LINK stub 01:24:02.871 CXX test/cpp_headers/crc16.o 01:24:02.871 LINK iscsi_fuzz 01:24:02.871 CXX test/cpp_headers/crc32.o 01:24:03.128 LINK scheduler 01:24:03.128 CC test/accel/dif/dif.o 01:24:03.128 CC test/nvme/overhead/overhead.o 01:24:03.128 LINK hello_fsdev 01:24:03.128 CC test/blobfs/mkfs/mkfs.o 01:24:03.128 LINK idxd_perf 01:24:03.128 CXX test/cpp_headers/crc64.o 01:24:03.128 CXX test/cpp_headers/dif.o 01:24:03.128 CC test/lvol/esnap/esnap.o 01:24:03.128 CXX test/cpp_headers/dma.o 01:24:03.385 CC examples/accel/perf/accel_perf.o 01:24:03.385 LINK mkfs 01:24:03.385 LINK overhead 01:24:03.385 CC test/nvme/err_injection/err_injection.o 01:24:03.385 CXX test/cpp_headers/endian.o 01:24:03.385 CC test/nvme/startup/startup.o 01:24:03.385 CC test/nvme/reserve/reserve.o 01:24:03.385 CC examples/blob/hello_world/hello_blob.o 01:24:03.691 CXX test/cpp_headers/env_dpdk.o 01:24:03.691 CXX test/cpp_headers/env.o 01:24:03.691 LINK startup 01:24:03.691 LINK err_injection 01:24:03.691 LINK reserve 01:24:03.691 CXX test/cpp_headers/event.o 01:24:03.691 LINK hello_blob 01:24:03.691 CC test/nvme/simple_copy/simple_copy.o 01:24:03.948 CC examples/nvme/hello_world/hello_world.o 01:24:03.948 CXX test/cpp_headers/fd_group.o 01:24:03.948 CXX test/cpp_headers/fd.o 01:24:03.948 CXX test/cpp_headers/file.o 01:24:03.948 LINK dif 01:24:03.948 LINK accel_perf 01:24:03.948 CC test/nvme/connect_stress/connect_stress.o 01:24:03.948 CXX test/cpp_headers/fsdev.o 01:24:04.206 LINK simple_copy 01:24:04.206 CC test/nvme/boot_partition/boot_partition.o 01:24:04.206 LINK hello_world 01:24:04.206 CC examples/blob/cli/blobcli.o 01:24:04.206 CC examples/nvme/reconnect/reconnect.o 01:24:04.206 CXX test/cpp_headers/fsdev_module.o 01:24:04.206 LINK connect_stress 01:24:04.206 CXX test/cpp_headers/ftl.o 01:24:04.206 LINK boot_partition 01:24:04.463 CC examples/bdev/hello_world/hello_bdev.o 01:24:04.463 CXX test/cpp_headers/fuse_dispatcher.o 01:24:04.463 CC examples/bdev/bdevperf/bdevperf.o 01:24:04.463 CXX test/cpp_headers/gpt_spec.o 01:24:04.463 CC test/nvme/compliance/nvme_compliance.o 01:24:04.463 CC test/nvme/fused_ordering/fused_ordering.o 01:24:04.721 LINK reconnect 01:24:04.721 CXX test/cpp_headers/hexlify.o 01:24:04.721 LINK hello_bdev 01:24:04.721 CC test/nvme/doorbell_aers/doorbell_aers.o 01:24:04.721 CC test/bdev/bdevio/bdevio.o 01:24:04.721 LINK blobcli 01:24:04.721 CXX test/cpp_headers/histogram_data.o 01:24:04.721 LINK fused_ordering 01:24:04.977 CC 
examples/nvme/nvme_manage/nvme_manage.o 01:24:04.977 LINK nvme_compliance 01:24:04.977 LINK doorbell_aers 01:24:04.977 CXX test/cpp_headers/idxd.o 01:24:04.977 CC test/nvme/fdp/fdp.o 01:24:04.977 CXX test/cpp_headers/idxd_spec.o 01:24:04.977 CC test/nvme/cuse/cuse.o 01:24:05.234 CC examples/nvme/arbitration/arbitration.o 01:24:05.234 CXX test/cpp_headers/init.o 01:24:05.234 LINK bdevio 01:24:05.234 CC examples/nvme/hotplug/hotplug.o 01:24:05.234 CC examples/nvme/cmb_copy/cmb_copy.o 01:24:05.491 CXX test/cpp_headers/ioat.o 01:24:05.491 LINK bdevperf 01:24:05.491 LINK fdp 01:24:05.491 CXX test/cpp_headers/ioat_spec.o 01:24:05.491 LINK nvme_manage 01:24:05.491 LINK cmb_copy 01:24:05.491 LINK arbitration 01:24:05.491 LINK hotplug 01:24:05.491 CXX test/cpp_headers/iscsi_spec.o 01:24:05.749 CC examples/nvme/abort/abort.o 01:24:05.749 CXX test/cpp_headers/json.o 01:24:05.749 CXX test/cpp_headers/jsonrpc.o 01:24:05.749 CXX test/cpp_headers/keyring.o 01:24:05.749 CXX test/cpp_headers/keyring_module.o 01:24:05.749 CXX test/cpp_headers/likely.o 01:24:05.749 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:24:05.749 CXX test/cpp_headers/log.o 01:24:06.007 CXX test/cpp_headers/lvol.o 01:24:06.007 CXX test/cpp_headers/md5.o 01:24:06.007 CXX test/cpp_headers/memory.o 01:24:06.007 CXX test/cpp_headers/mmio.o 01:24:06.007 CXX test/cpp_headers/nbd.o 01:24:06.007 CXX test/cpp_headers/net.o 01:24:06.007 LINK pmr_persistence 01:24:06.007 CXX test/cpp_headers/notify.o 01:24:06.007 LINK abort 01:24:06.007 CXX test/cpp_headers/nvme.o 01:24:06.007 CXX test/cpp_headers/nvme_intel.o 01:24:06.266 CXX test/cpp_headers/nvme_ocssd.o 01:24:06.266 CXX test/cpp_headers/nvme_ocssd_spec.o 01:24:06.266 CXX test/cpp_headers/nvme_spec.o 01:24:06.266 CXX test/cpp_headers/nvme_zns.o 01:24:06.266 CXX test/cpp_headers/nvmf_cmd.o 01:24:06.266 CXX test/cpp_headers/nvmf_fc_spec.o 01:24:06.266 CXX test/cpp_headers/nvmf.o 01:24:06.266 CXX test/cpp_headers/nvmf_spec.o 01:24:06.266 CXX test/cpp_headers/nvmf_transport.o 01:24:06.266 CXX test/cpp_headers/opal.o 01:24:06.266 CXX test/cpp_headers/opal_spec.o 01:24:06.525 CXX test/cpp_headers/pci_ids.o 01:24:06.525 CC examples/nvmf/nvmf/nvmf.o 01:24:06.525 CXX test/cpp_headers/pipe.o 01:24:06.525 CXX test/cpp_headers/queue.o 01:24:06.525 CXX test/cpp_headers/reduce.o 01:24:06.525 CXX test/cpp_headers/rpc.o 01:24:06.525 CXX test/cpp_headers/scheduler.o 01:24:06.525 CXX test/cpp_headers/scsi.o 01:24:06.525 CXX test/cpp_headers/scsi_spec.o 01:24:06.525 CXX test/cpp_headers/sock.o 01:24:06.783 LINK cuse 01:24:06.783 CXX test/cpp_headers/stdinc.o 01:24:06.783 CXX test/cpp_headers/string.o 01:24:06.783 CXX test/cpp_headers/thread.o 01:24:06.783 CXX test/cpp_headers/trace.o 01:24:06.783 CXX test/cpp_headers/trace_parser.o 01:24:06.783 CXX test/cpp_headers/tree.o 01:24:06.783 LINK nvmf 01:24:06.783 CXX test/cpp_headers/ublk.o 01:24:06.783 CXX test/cpp_headers/util.o 01:24:07.041 CXX test/cpp_headers/uuid.o 01:24:07.041 CXX test/cpp_headers/version.o 01:24:07.041 CXX test/cpp_headers/vfio_user_pci.o 01:24:07.041 CXX test/cpp_headers/vfio_user_spec.o 01:24:07.041 CXX test/cpp_headers/vhost.o 01:24:07.041 CXX test/cpp_headers/vmd.o 01:24:07.041 CXX test/cpp_headers/xor.o 01:24:07.041 CXX test/cpp_headers/zipf.o 01:24:11.223 LINK esnap 01:24:11.223 01:24:11.223 real 1m41.637s 01:24:11.223 user 9m27.817s 01:24:11.223 sys 1m52.063s 01:24:11.223 05:19:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:24:11.223 05:19:02 make -- common/autotest_common.sh@10 -- $ set +x 01:24:11.223 
************************************ 01:24:11.223 END TEST make 01:24:11.223 ************************************ 01:24:11.223 05:19:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 01:24:11.223 05:19:02 -- pm/common@29 -- $ signal_monitor_resources TERM 01:24:11.223 05:19:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:24:11.223 05:19:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:24:11.223 05:19:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:24:11.223 05:19:02 -- pm/common@44 -- $ pid=5340 01:24:11.223 05:19:02 -- pm/common@50 -- $ kill -TERM 5340 01:24:11.223 05:19:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:24:11.223 05:19:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:24:11.223 05:19:02 -- pm/common@44 -- $ pid=5342 01:24:11.223 05:19:02 -- pm/common@50 -- $ kill -TERM 5342 01:24:11.223 05:19:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 01:24:11.223 05:19:02 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:24:11.223 05:19:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:11.223 05:19:02 -- common/autotest_common.sh@1693 -- # lcov --version 01:24:11.223 05:19:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:11.223 05:19:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:11.223 05:19:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:11.223 05:19:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:11.223 05:19:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:11.223 05:19:02 -- scripts/common.sh@336 -- # IFS=.-: 01:24:11.223 05:19:02 -- scripts/common.sh@336 -- # read -ra ver1 01:24:11.223 05:19:02 -- scripts/common.sh@337 -- # IFS=.-: 01:24:11.223 05:19:02 -- scripts/common.sh@337 -- # read -ra ver2 01:24:11.223 05:19:02 -- scripts/common.sh@338 -- # local 'op=<' 01:24:11.223 05:19:02 -- scripts/common.sh@340 -- # ver1_l=2 01:24:11.223 05:19:02 -- scripts/common.sh@341 -- # ver2_l=1 01:24:11.223 05:19:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:11.223 05:19:02 -- scripts/common.sh@344 -- # case "$op" in 01:24:11.223 05:19:02 -- scripts/common.sh@345 -- # : 1 01:24:11.223 05:19:02 -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:11.223 05:19:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:11.223 05:19:02 -- scripts/common.sh@365 -- # decimal 1 01:24:11.223 05:19:02 -- scripts/common.sh@353 -- # local d=1 01:24:11.223 05:19:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:11.223 05:19:02 -- scripts/common.sh@355 -- # echo 1 01:24:11.223 05:19:02 -- scripts/common.sh@365 -- # ver1[v]=1 01:24:11.223 05:19:02 -- scripts/common.sh@366 -- # decimal 2 01:24:11.223 05:19:02 -- scripts/common.sh@353 -- # local d=2 01:24:11.223 05:19:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:11.223 05:19:02 -- scripts/common.sh@355 -- # echo 2 01:24:11.223 05:19:02 -- scripts/common.sh@366 -- # ver2[v]=2 01:24:11.223 05:19:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:11.223 05:19:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:11.223 05:19:02 -- scripts/common.sh@368 -- # return 0 01:24:11.223 05:19:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:11.223 05:19:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:11.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:11.223 --rc genhtml_branch_coverage=1 01:24:11.223 --rc genhtml_function_coverage=1 01:24:11.223 --rc genhtml_legend=1 01:24:11.223 --rc geninfo_all_blocks=1 01:24:11.223 --rc geninfo_unexecuted_blocks=1 01:24:11.223 01:24:11.223 ' 01:24:11.223 05:19:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:11.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:11.223 --rc genhtml_branch_coverage=1 01:24:11.223 --rc genhtml_function_coverage=1 01:24:11.223 --rc genhtml_legend=1 01:24:11.223 --rc geninfo_all_blocks=1 01:24:11.223 --rc geninfo_unexecuted_blocks=1 01:24:11.223 01:24:11.223 ' 01:24:11.223 05:19:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:11.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:11.223 --rc genhtml_branch_coverage=1 01:24:11.223 --rc genhtml_function_coverage=1 01:24:11.223 --rc genhtml_legend=1 01:24:11.223 --rc geninfo_all_blocks=1 01:24:11.223 --rc geninfo_unexecuted_blocks=1 01:24:11.223 01:24:11.223 ' 01:24:11.223 05:19:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:11.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:11.223 --rc genhtml_branch_coverage=1 01:24:11.223 --rc genhtml_function_coverage=1 01:24:11.223 --rc genhtml_legend=1 01:24:11.223 --rc geninfo_all_blocks=1 01:24:11.223 --rc geninfo_unexecuted_blocks=1 01:24:11.223 01:24:11.223 ' 01:24:11.223 05:19:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:11.223 05:19:02 -- nvmf/common.sh@7 -- # uname -s 01:24:11.223 05:19:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:11.223 05:19:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:11.223 05:19:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:11.223 05:19:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:11.223 05:19:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:11.223 05:19:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:11.223 05:19:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:11.223 05:19:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:11.223 05:19:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:11.223 05:19:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:11.223 05:19:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fab57822-ba49-4e71-bebd-8b94bbcfdc8e 01:24:11.223 
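The nvmf/common.sh environment block that follows trips a classic bash pitfall: line 33 runs `'[' '' -eq 1 ']'`, and `[` demands integers on both sides of `-eq`, so an empty value makes the test itself fail with "[: : integer expression expected" rather than just evaluating false. A minimal sketch of the failure mode and a defensive variant (illustrative only, not the upstream fix):

```bash
#!/usr/bin/env bash
# Reproduces the nvmf/common.sh line 33 complaint visible below:
# an unset/empty variable makes the numeric test error out (status 2)
# instead of simply being false.
flag=""                         # e.g. a SPDK_* toggle that was never exported

if [ "$flag" -eq 1 ]; then      # errors: '' is not an integer
    echo "enabled"
fi

# Defensive variant: default the value to 0 before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"             # prints "disabled" with no warning
fi
```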
05:19:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=fab57822-ba49-4e71-bebd-8b94bbcfdc8e 01:24:11.223 05:19:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:11.223 05:19:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:11.223 05:19:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:24:11.223 05:19:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:11.223 05:19:02 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:11.223 05:19:02 -- scripts/common.sh@15 -- # shopt -s extglob 01:24:11.223 05:19:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:11.223 05:19:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:11.223 05:19:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:11.223 05:19:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:11.223 05:19:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:11.223 05:19:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:11.223 05:19:02 -- paths/export.sh@5 -- # export PATH 01:24:11.223 05:19:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:11.224 05:19:02 -- nvmf/common.sh@51 -- # : 0 01:24:11.224 05:19:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:11.224 05:19:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:11.224 05:19:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:11.224 05:19:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:11.224 05:19:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:11.224 05:19:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:11.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:11.224 05:19:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:11.224 05:19:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:11.224 05:19:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:11.224 05:19:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 01:24:11.224 05:19:02 -- spdk/autotest.sh@32 -- # uname -s 01:24:11.224 05:19:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 01:24:11.224 05:19:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 01:24:11.224 05:19:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 01:24:11.224 05:19:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 01:24:11.224 05:19:02 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 01:24:11.224 05:19:02 -- spdk/autotest.sh@44 -- # modprobe nbd 01:24:11.224 05:19:02 -- spdk/autotest.sh@46 -- # type -P udevadm 01:24:11.224 05:19:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 01:24:11.224 05:19:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54950 01:24:11.224 05:19:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 01:24:11.224 05:19:02 -- pm/common@17 -- # local monitor 01:24:11.224 05:19:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:24:11.224 05:19:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:24:11.224 05:19:02 -- pm/common@25 -- # sleep 1 01:24:11.224 05:19:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 01:24:11.224 05:19:02 -- pm/common@21 -- # date +%s 01:24:11.224 05:19:02 -- pm/common@21 -- # date +%s 01:24:11.224 05:19:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733721542 01:24:11.224 05:19:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733721542 01:24:11.487 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733721542_collect-cpu-load.pm.log 01:24:11.487 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733721542_collect-vmstat.pm.log 01:24:12.421 05:19:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 01:24:12.421 05:19:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 01:24:12.421 05:19:03 -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:12.421 05:19:03 -- common/autotest_common.sh@10 -- # set +x 01:24:12.421 05:19:03 -- spdk/autotest.sh@59 -- # create_test_list 01:24:12.421 05:19:03 -- common/autotest_common.sh@752 -- # xtrace_disable 01:24:12.421 05:19:03 -- common/autotest_common.sh@10 -- # set +x 01:24:12.421 05:19:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 01:24:12.421 05:19:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 01:24:12.421 05:19:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 01:24:12.421 05:19:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 01:24:12.421 05:19:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 01:24:12.421 05:19:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 01:24:12.421 05:19:03 -- common/autotest_common.sh@1457 -- # uname 01:24:12.421 05:19:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 01:24:12.421 05:19:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 01:24:12.421 05:19:03 -- common/autotest_common.sh@1477 -- # uname 01:24:12.421 05:19:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 01:24:12.421 05:19:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 01:24:12.421 05:19:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 01:24:12.421 lcov: LCOV version 1.15 01:24:12.421 05:19:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 01:24:30.496 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 01:24:30.496 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 01:24:48.575 05:19:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 01:24:48.575 05:19:38 -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:48.575 05:19:38 -- common/autotest_common.sh@10 -- # set +x 01:24:48.575 05:19:38 -- spdk/autotest.sh@78 -- # rm -f 01:24:48.575 05:19:38 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:24:48.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:24:48.575 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:24:48.575 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:24:48.575 0000:00:12.0 (1b36 0010): Already using the nvme driver 01:24:48.575 0000:00:13.0 (1b36 0010): Already using the nvme driver 01:24:48.575 05:19:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 01:24:48.575 05:19:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:24:48.575 05:19:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:24:48.575 05:19:39 -- common/autotest_common.sh@1658 -- # local nvme bdf 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 01:24:48.575 
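The pre-cleanup sweep traced here and continuing below does three things per NVMe namespace: flag zoned devices via sysfs, probe for a partition table with spdk-gpt.py/blkid ("No valid GPT data, bailing" when there is none), and zero the first MiB with dd. A condensed re-creation, assuming root and plain bash (the real logic lives in autotest_common.sh and scripts/common.sh):

```bash
#!/usr/bin/env bash
shopt -s extglob nullglob

# Pass 1: remember zoned namespaces; a conventional block device
# reports "none" in queue/zoned.
declare -A zoned
for sys in /sys/block/nvme*; do
    dev=${sys##*/}
    if [[ -e $sys/queue/zoned && $(<"$sys/queue/zoned") != none ]]; then
        zoned[$dev]=1
    fi
done

# Pass 2: wipe the first MiB of every non-zoned namespace that has
# no partition table. blkid prints a PTTYPE only when one exists.
for dev in /dev/nvme*n!(*p*); do        # namespaces, not partitions
    [[ -n ${zoned[${dev##*/}]:-} ]] && continue
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB
    fi
done
```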
05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:24:48.575 05:19:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 01:24:48.575 05:19:39 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 01:24:48.575 05:19:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 01:24:48.575 05:19:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:24:48.575 05:19:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 01:24:48.575 05:19:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:24:48.576 05:19:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:24:48.576 05:19:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 01:24:48.576 05:19:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:24:48.576 05:19:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:24:48.576 No valid GPT data, bailing 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # pt= 01:24:48.576 05:19:39 -- scripts/common.sh@395 -- # return 1 01:24:48.576 05:19:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 01:24:48.576 1+0 records in 01:24:48.576 1+0 records out 01:24:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014177 s, 74.0 MB/s 01:24:48.576 05:19:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:24:48.576 05:19:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:24:48.576 05:19:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 01:24:48.576 05:19:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 01:24:48.576 05:19:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 01:24:48.576 No valid GPT data, bailing 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # pt= 01:24:48.576 05:19:39 -- scripts/common.sh@395 -- # return 1 01:24:48.576 05:19:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 01:24:48.576 1+0 records in 01:24:48.576 1+0 records out 01:24:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506767 s, 207 MB/s 01:24:48.576 05:19:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:24:48.576 05:19:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:24:48.576 05:19:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 01:24:48.576 05:19:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 01:24:48.576 05:19:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 01:24:48.576 No valid GPT data, bailing 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # pt= 01:24:48.576 05:19:39 -- scripts/common.sh@395 -- # return 1 01:24:48.576 05:19:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 01:24:48.576 1+0 
records in 01:24:48.576 1+0 records out 01:24:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474768 s, 221 MB/s 01:24:48.576 05:19:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:24:48.576 05:19:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:24:48.576 05:19:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 01:24:48.576 05:19:39 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 01:24:48.576 05:19:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 01:24:48.576 No valid GPT data, bailing 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # pt= 01:24:48.576 05:19:39 -- scripts/common.sh@395 -- # return 1 01:24:48.576 05:19:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 01:24:48.576 1+0 records in 01:24:48.576 1+0 records out 01:24:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508531 s, 206 MB/s 01:24:48.576 05:19:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:24:48.576 05:19:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:24:48.576 05:19:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 01:24:48.576 05:19:39 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 01:24:48.576 05:19:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 01:24:48.576 No valid GPT data, bailing 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 01:24:48.576 05:19:39 -- scripts/common.sh@394 -- # pt= 01:24:48.576 05:19:39 -- scripts/common.sh@395 -- # return 1 01:24:48.576 05:19:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 01:24:48.576 1+0 records in 01:24:48.576 1+0 records out 01:24:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442392 s, 237 MB/s 01:24:48.576 05:19:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:24:48.576 05:19:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:24:48.576 05:19:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 01:24:48.576 05:19:40 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 01:24:48.576 05:19:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 01:24:48.576 No valid GPT data, bailing 01:24:48.576 05:19:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 01:24:48.576 05:19:40 -- scripts/common.sh@394 -- # pt= 01:24:48.576 05:19:40 -- scripts/common.sh@395 -- # return 1 01:24:48.576 05:19:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 01:24:48.576 1+0 records in 01:24:48.576 1+0 records out 01:24:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050141 s, 209 MB/s 01:24:48.576 05:19:40 -- spdk/autotest.sh@105 -- # sync 01:24:48.576 05:19:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 01:24:48.576 05:19:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 01:24:48.576 05:19:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 01:24:51.104 05:19:42 -- spdk/autotest.sh@111 -- # uname -s 01:24:51.104 05:19:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 01:24:51.104 05:19:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 01:24:51.104 05:19:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:24:51.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:24:51.672 
Hugepages 01:24:51.672 node hugesize free / total 01:24:51.672 node0 1048576kB 0 / 0 01:24:51.672 node0 2048kB 0 / 0 01:24:51.672 01:24:51.672 Type BDF Vendor Device NUMA Driver Device Block devices 01:24:51.672 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:24:51.931 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:24:51.931 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:24:51.931 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 01:24:51.931 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 01:24:51.931 05:19:43 -- spdk/autotest.sh@117 -- # uname -s 01:24:51.931 05:19:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 01:24:51.931 05:19:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 01:24:51.931 05:19:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:24:52.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:24:53.062 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:24:53.062 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:24:53.062 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:24:53.320 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:24:53.320 05:19:44 -- common/autotest_common.sh@1517 -- # sleep 1 01:24:54.259 05:19:45 -- common/autotest_common.sh@1518 -- # bdfs=() 01:24:54.259 05:19:45 -- common/autotest_common.sh@1518 -- # local bdfs 01:24:54.259 05:19:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 01:24:54.259 05:19:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 01:24:54.259 05:19:45 -- common/autotest_common.sh@1498 -- # bdfs=() 01:24:54.259 05:19:45 -- common/autotest_common.sh@1498 -- # local bdfs 01:24:54.259 05:19:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:24:54.259 05:19:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:24:54.259 05:19:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:24:54.259 05:19:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:24:54.259 05:19:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:24:54.259 05:19:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:24:54.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:24:54.825 Waiting for block devices as requested 01:24:55.083 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:24:55.083 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:24:55.083 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:24:55.083 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:25:00.349 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:25:00.349 05:19:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:25:00.349 05:19:51 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 01:25:00.349 05:19:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # grep oacs 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:25:00.349 05:19:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:25:00.349 05:19:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:25:00.349 05:19:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1543 -- # continue 01:25:00.349 05:19:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # grep oacs 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:25:00.349 05:19:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:25:00.349 05:19:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:25:00.349 05:19:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1543 -- # continue 01:25:00.349 05:19:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
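The get_nvme_ctrlr_from_bdf trace above resolves each /sys/class/nvme symlink to its PCI path and picks the controller whose path contains the requested BDF. A simplified sketch of the same lookup:

```bash
#!/usr/bin/env bash
# Map a PCI BDF to its /dev/nvmeN controller node via sysfs
# (condensed from the autotest_common.sh helper traced above).
bdf=0000:00:10.0

for link in /sys/class/nvme/nvme*; do
    path=$(readlink -f "$link")        # e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
    if [[ $path == *"$bdf/nvme/"* ]]; then
        ctrlr=/dev/$(basename "$path") # -> /dev/nvme1
        echo "$bdf -> $ctrlr"
        break
    fi
done
```

The lookup exists because the kernel's nvmeN numbering does not track PCI order; in this run 0000:00:10.0 enumerates as nvme1 and 0000:00:11.0 as nvme0.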
01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # grep oacs 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:25:00.349 05:19:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:25:00.349 05:19:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:25:00.349 05:19:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:25:00.349 05:19:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1543 -- # continue 01:25:00.349 05:19:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:25:00.349 05:19:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 01:25:00.349 05:19:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 01:25:00.349 05:19:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 01:25:00.349 05:19:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 01:25:00.350 05:19:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 01:25:00.350 05:19:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 01:25:00.350 05:19:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 01:25:00.350 05:19:51 -- common/autotest_common.sh@1531 -- # grep oacs 01:25:00.350 05:19:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:25:00.350 05:19:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:25:00.350 05:19:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:25:00.350 05:19:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:25:00.350 05:19:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:25:00.350 05:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 01:25:00.350 05:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:25:00.350 05:19:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:25:00.350 05:19:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
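The id-ctrl parsing repeated for each controller above extracts two fields from nvme-cli output: OACS, whose bit 3 (0x8) indicates Namespace Management support, and UNVMCAP, the unallocated capacity. Here oacs is 0x12a, so 0x12a & 0x8 = 8 and the capability check passes; unvmcap of 0 means the namespaces already cover the drive and the revert is skipped. A sketch of the same extraction:

```bash
#!/usr/bin/env bash
# Parse "nvme id-ctrl" the way the trace above does; field names come
# straight from nvme-cli output.
ctrlr=/dev/nvme1

oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # ' 0x12a'
ns_manage=$(( oacs & 0x8 ))                                 # 8 when NS mgmt is supported

if (( ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # unvmcap == 0: no unallocated capacity, nothing to revert.
    (( unvmcap == 0 )) && echo "$ctrlr: namespaces intact, skipping"
fi
```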
01:25:00.350 05:19:51 -- common/autotest_common.sh@1543 -- # continue 01:25:00.350 05:19:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 01:25:00.350 05:19:51 -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:00.350 05:19:51 -- common/autotest_common.sh@10 -- # set +x 01:25:00.350 05:19:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 01:25:00.350 05:19:51 -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:00.350 05:19:51 -- common/autotest_common.sh@10 -- # set +x 01:25:00.608 05:19:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:25:01.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:25:01.745 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:25:01.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:25:01.745 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:25:01.745 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:25:01.745 05:19:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 01:25:01.745 05:19:53 -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:01.745 05:19:53 -- common/autotest_common.sh@10 -- # set +x 01:25:01.745 05:19:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 01:25:01.745 05:19:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 01:25:01.745 05:19:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 01:25:01.745 05:19:53 -- common/autotest_common.sh@1563 -- # bdfs=() 01:25:01.745 05:19:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 01:25:01.745 05:19:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 01:25:01.745 05:19:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 01:25:01.745 05:19:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 01:25:01.745 05:19:53 -- common/autotest_common.sh@1498 -- # bdfs=() 01:25:01.745 05:19:53 -- common/autotest_common.sh@1498 -- # local bdfs 01:25:01.745 05:19:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:25:01.745 05:19:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:25:01.745 05:19:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:25:01.745 05:19:53 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:25:01.745 05:19:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:25:01.745 05:19:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # device=0x0010 01:25:01.745 05:19:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:25:01.745 05:19:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # device=0x0010 01:25:01.745 05:19:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:25:01.745 05:19:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # device=0x0010 01:25:01.745 05:19:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
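The opal_revert_cleanup step that follows filters controllers by PCI device ID: get_nvme_bdfs_by_id keeps only BDFs whose /sys/bus/pci/devices/<bdf>/device matches 0x0a54 (the ID this test associates with Opal-capable Intel NVMe drives; tying it to a specific model is my reading, not the log's). The emulated QEMU controllers all report 0x0010, so the list ends up empty. A sketch of the filter:

```bash
#!/usr/bin/env bash
# Keep only NVMe controllers whose PCI device ID matches the target
# (mirrors the cat/compare loop traced around this point).
want=0x0a54
matches=()

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010
    [[ $device == "$want" ]] && matches+=("$bdf")
done

echo "opal-capable controllers: ${#matches[@]}"        # 0 in this run
```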
01:25:01.745 05:19:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 01:25:01.745 05:19:53 -- common/autotest_common.sh@1566 -- # device=0x0010 01:25:01.745 05:19:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:25:01.745 05:19:53 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 01:25:01.745 05:19:53 -- common/autotest_common.sh@1572 -- # return 0 01:25:01.745 05:19:53 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 01:25:01.745 05:19:53 -- common/autotest_common.sh@1580 -- # return 0 01:25:01.745 05:19:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 01:25:01.745 05:19:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 01:25:01.745 05:19:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:25:01.745 05:19:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:25:01.745 05:19:53 -- spdk/autotest.sh@149 -- # timing_enter lib 01:25:01.745 05:19:53 -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:01.745 05:19:53 -- common/autotest_common.sh@10 -- # set +x 01:25:02.004 05:19:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 01:25:02.004 05:19:53 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:25:02.004 05:19:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:02.004 05:19:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:02.004 05:19:53 -- common/autotest_common.sh@10 -- # set +x 01:25:02.004 ************************************ 01:25:02.004 START TEST env 01:25:02.004 ************************************ 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:25:02.004 * Looking for test storage... 01:25:02.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1693 -- # lcov --version 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:02.004 05:19:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:02.004 05:19:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:02.004 05:19:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:02.004 05:19:53 env -- scripts/common.sh@336 -- # IFS=.-: 01:25:02.004 05:19:53 env -- scripts/common.sh@336 -- # read -ra ver1 01:25:02.004 05:19:53 env -- scripts/common.sh@337 -- # IFS=.-: 01:25:02.004 05:19:53 env -- scripts/common.sh@337 -- # read -ra ver2 01:25:02.004 05:19:53 env -- scripts/common.sh@338 -- # local 'op=<' 01:25:02.004 05:19:53 env -- scripts/common.sh@340 -- # ver1_l=2 01:25:02.004 05:19:53 env -- scripts/common.sh@341 -- # ver2_l=1 01:25:02.004 05:19:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:02.004 05:19:53 env -- scripts/common.sh@344 -- # case "$op" in 01:25:02.004 05:19:53 env -- scripts/common.sh@345 -- # : 1 01:25:02.004 05:19:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:02.004 05:19:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:02.004 05:19:53 env -- scripts/common.sh@365 -- # decimal 1 01:25:02.004 05:19:53 env -- scripts/common.sh@353 -- # local d=1 01:25:02.004 05:19:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:02.004 05:19:53 env -- scripts/common.sh@355 -- # echo 1 01:25:02.004 05:19:53 env -- scripts/common.sh@365 -- # ver1[v]=1 01:25:02.004 05:19:53 env -- scripts/common.sh@366 -- # decimal 2 01:25:02.004 05:19:53 env -- scripts/common.sh@353 -- # local d=2 01:25:02.004 05:19:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:02.004 05:19:53 env -- scripts/common.sh@355 -- # echo 2 01:25:02.004 05:19:53 env -- scripts/common.sh@366 -- # ver2[v]=2 01:25:02.004 05:19:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:02.004 05:19:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:02.004 05:19:53 env -- scripts/common.sh@368 -- # return 0 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:02.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:02.004 --rc genhtml_branch_coverage=1 01:25:02.004 --rc genhtml_function_coverage=1 01:25:02.004 --rc genhtml_legend=1 01:25:02.004 --rc geninfo_all_blocks=1 01:25:02.004 --rc geninfo_unexecuted_blocks=1 01:25:02.004 01:25:02.004 ' 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:02.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:02.004 --rc genhtml_branch_coverage=1 01:25:02.004 --rc genhtml_function_coverage=1 01:25:02.004 --rc genhtml_legend=1 01:25:02.004 --rc geninfo_all_blocks=1 01:25:02.004 --rc geninfo_unexecuted_blocks=1 01:25:02.004 01:25:02.004 ' 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:02.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:02.004 --rc genhtml_branch_coverage=1 01:25:02.004 --rc genhtml_function_coverage=1 01:25:02.004 --rc genhtml_legend=1 01:25:02.004 --rc geninfo_all_blocks=1 01:25:02.004 --rc geninfo_unexecuted_blocks=1 01:25:02.004 01:25:02.004 ' 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:02.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:02.004 --rc genhtml_branch_coverage=1 01:25:02.004 --rc genhtml_function_coverage=1 01:25:02.004 --rc genhtml_legend=1 01:25:02.004 --rc geninfo_all_blocks=1 01:25:02.004 --rc geninfo_unexecuted_blocks=1 01:25:02.004 01:25:02.004 ' 01:25:02.004 05:19:53 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:02.004 05:19:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:02.004 05:19:53 env -- common/autotest_common.sh@10 -- # set +x 01:25:02.004 ************************************ 01:25:02.004 START TEST env_memory 01:25:02.004 ************************************ 01:25:02.004 05:19:53 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:25:02.004 01:25:02.004 01:25:02.004 CUnit - A unit testing framework for C - Version 2.1-3 01:25:02.004 http://cunit.sourceforge.net/ 01:25:02.004 01:25:02.004 01:25:02.004 Suite: memory 01:25:02.272 Test: alloc and free memory map ...[2024-12-09 05:19:53.644170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
284:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:25:02.272 passed 01:25:02.272 Test: mem map translation ...[2024-12-09 05:19:53.704331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:25:02.272 [2024-12-09 05:19:53.704451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:25:02.272 [2024-12-09 05:19:53.704569] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:25:02.272 [2024-12-09 05:19:53.704611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 606:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:25:02.272 passed 01:25:02.272 Test: mem map registration ...[2024-12-09 05:19:53.802993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 01:25:02.272 [2024-12-09 05:19:53.803112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 01:25:02.272 passed 01:25:02.530 Test: mem map adjacent registrations ...passed 01:25:02.530 01:25:02.530 Run Summary: Type Total Ran Passed Failed Inactive 01:25:02.530 suites 1 1 n/a 0 0 01:25:02.530 tests 4 4 4 0 0 01:25:02.530 asserts 152 152 152 0 n/a 01:25:02.530 01:25:02.530 Elapsed time = 0.343 seconds 01:25:02.530 01:25:02.530 real 0m0.383s 01:25:02.530 user 0m0.354s 01:25:02.530 sys 0m0.023s 01:25:02.530 05:19:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:02.530 05:19:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:25:02.530 ************************************ 01:25:02.530 END TEST env_memory 01:25:02.530 ************************************ 01:25:02.530 05:19:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:25:02.530 05:19:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:02.530 05:19:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:02.530 05:19:53 env -- common/autotest_common.sh@10 -- # set +x 01:25:02.530 ************************************ 01:25:02.530 START TEST env_vtophys 01:25:02.530 ************************************ 01:25:02.530 05:19:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:25:02.530 EAL: lib.eal log level changed from notice to debug 01:25:02.530 EAL: Detected lcore 0 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 1 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 2 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 3 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 4 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 5 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 6 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 7 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 8 as core 0 on socket 0 01:25:02.530 EAL: Detected lcore 9 as core 0 on socket 0 01:25:02.530 EAL: Maximum logical cores by configuration: 128 01:25:02.530 EAL: Detected CPU lcores: 10 01:25:02.530 EAL: Detected NUMA nodes: 1 01:25:02.530 EAL: Checking presence of .so 'librte_eal.so.24.1' 01:25:02.530 EAL: Detected shared linkage of DPDK 01:25:02.530 EAL: No 
shared files mode enabled, IPC will be disabled 01:25:02.530 EAL: Selected IOVA mode 'PA' 01:25:02.530 EAL: Probing VFIO support... 01:25:02.530 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:25:02.530 EAL: VFIO modules not loaded, skipping VFIO support... 01:25:02.530 EAL: Ask a virtual area of 0x2e000 bytes 01:25:02.530 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:25:02.530 EAL: Setting up physically contiguous memory... 01:25:02.530 EAL: Setting maximum number of open files to 524288 01:25:02.530 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:25:02.530 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:25:02.530 EAL: Ask a virtual area of 0x61000 bytes 01:25:02.530 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:25:02.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:25:02.530 EAL: Ask a virtual area of 0x400000000 bytes 01:25:02.530 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:25:02.530 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:25:02.530 EAL: Ask a virtual area of 0x61000 bytes 01:25:02.530 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:25:02.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:25:02.530 EAL: Ask a virtual area of 0x400000000 bytes 01:25:02.530 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:25:02.530 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:25:02.530 EAL: Ask a virtual area of 0x61000 bytes 01:25:02.530 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:25:02.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:25:02.530 EAL: Ask a virtual area of 0x400000000 bytes 01:25:02.530 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:25:02.530 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:25:02.530 EAL: Ask a virtual area of 0x61000 bytes 01:25:02.530 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:25:02.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:25:02.530 EAL: Ask a virtual area of 0x400000000 bytes 01:25:02.530 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:25:02.530 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:25:02.530 EAL: Hugepages will be freed exactly as allocated. 01:25:02.530 EAL: No shared files mode enabled, IPC is disabled 01:25:02.530 EAL: No shared files mode enabled, IPC is disabled 01:25:02.804 EAL: TSC frequency is ~2200000 KHz 01:25:02.804 EAL: Main lcore 0 is ready (tid=7fd2c04d7a40;cpuset=[0]) 01:25:02.804 EAL: Trying to obtain current memory policy. 01:25:02.804 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:02.804 EAL: Restoring previous memory policy: 0 01:25:02.804 EAL: request: mp_malloc_sync 01:25:02.804 EAL: No shared files mode enabled, IPC is disabled 01:25:02.804 EAL: Heap on socket 0 was expanded by 2MB 01:25:02.804 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:25:02.804 EAL: No PCI address specified using 'addr=' in: bus=pci 01:25:02.804 EAL: Mem event callback 'spdk:(nil)' registered 01:25:02.804 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 01:25:02.804 01:25:02.804 01:25:02.804 CUnit - A unit testing framework for C - Version 2.1-3 01:25:02.804 http://cunit.sourceforge.net/ 01:25:02.804 01:25:02.804 01:25:02.804 Suite: components_suite 01:25:03.369 Test: vtophys_malloc_test ...passed 01:25:03.369 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 01:25:03.369 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.369 EAL: Restoring previous memory policy: 4 01:25:03.369 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.369 EAL: request: mp_malloc_sync 01:25:03.369 EAL: No shared files mode enabled, IPC is disabled 01:25:03.369 EAL: Heap on socket 0 was expanded by 4MB 01:25:03.369 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.369 EAL: request: mp_malloc_sync 01:25:03.369 EAL: No shared files mode enabled, IPC is disabled 01:25:03.369 EAL: Heap on socket 0 was shrunk by 4MB 01:25:03.369 EAL: Trying to obtain current memory policy. 01:25:03.369 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.369 EAL: Restoring previous memory policy: 4 01:25:03.369 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.369 EAL: request: mp_malloc_sync 01:25:03.369 EAL: No shared files mode enabled, IPC is disabled 01:25:03.369 EAL: Heap on socket 0 was expanded by 6MB 01:25:03.369 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.369 EAL: request: mp_malloc_sync 01:25:03.369 EAL: No shared files mode enabled, IPC is disabled 01:25:03.369 EAL: Heap on socket 0 was shrunk by 6MB 01:25:03.369 EAL: Trying to obtain current memory policy. 01:25:03.369 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.369 EAL: Restoring previous memory policy: 4 01:25:03.369 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.370 EAL: request: mp_malloc_sync 01:25:03.370 EAL: No shared files mode enabled, IPC is disabled 01:25:03.370 EAL: Heap on socket 0 was expanded by 10MB 01:25:03.370 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.370 EAL: request: mp_malloc_sync 01:25:03.370 EAL: No shared files mode enabled, IPC is disabled 01:25:03.370 EAL: Heap on socket 0 was shrunk by 10MB 01:25:03.370 EAL: Trying to obtain current memory policy. 01:25:03.370 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.370 EAL: Restoring previous memory policy: 4 01:25:03.370 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.370 EAL: request: mp_malloc_sync 01:25:03.370 EAL: No shared files mode enabled, IPC is disabled 01:25:03.370 EAL: Heap on socket 0 was expanded by 18MB 01:25:03.370 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.370 EAL: request: mp_malloc_sync 01:25:03.370 EAL: No shared files mode enabled, IPC is disabled 01:25:03.370 EAL: Heap on socket 0 was shrunk by 18MB 01:25:03.370 EAL: Trying to obtain current memory policy. 01:25:03.370 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.370 EAL: Restoring previous memory policy: 4 01:25:03.370 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.370 EAL: request: mp_malloc_sync 01:25:03.370 EAL: No shared files mode enabled, IPC is disabled 01:25:03.370 EAL: Heap on socket 0 was expanded by 34MB 01:25:03.370 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.370 EAL: request: mp_malloc_sync 01:25:03.370 EAL: No shared files mode enabled, IPC is disabled 01:25:03.370 EAL: Heap on socket 0 was shrunk by 34MB 01:25:03.626 EAL: Trying to obtain current memory policy. 
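(The env_memory failures logged earlier are expected: the test feeds deliberately bad values to the mem map API. Translations are tracked in 2 MB pages, so both vaddr and len must be multiples of 2 MB — vaddr=2097152 len=1234 and vaddr=1234 len=2097152 each violate one of the two — and addresses outside the usermode range are rejected outright; the rejected 281474976710656 is 2^48. A minimal sketch of the calls in question, assuming the public spdk/env.h interface, with all return-code checks trimmed:

    /* Sketch of the mem_map API exercised by env_memory above; spdk/env.h
     * assumed, error handling trimmed. */
    #include "spdk/env.h"

    #define PAGE_2MB 0x200000ULL

    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            /* Invoked per registered region on REGISTER/UNREGISTER; returning
             * non-zero fails the operation ("Initial mem_map notify failed"
             * above is a test case doing exactly that on purpose). */
            return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

    static void
    mem_map_demo(uint64_t vaddr /* any 2 MB-aligned usermode address */)
    {
            struct spdk_mem_map *map;
            uint64_t size = PAGE_2MB;

            map = spdk_mem_map_alloc(SPDK_VTOPHYS_ERROR /* default translation */,
                                     &ops, NULL);
            /* Both vaddr and len must be 2 MB-aligned, or this fails as logged. */
            spdk_mem_map_set_translation(map, vaddr, PAGE_2MB, 0xabcd);
            spdk_mem_map_translate(map, vaddr, &size);
            spdk_mem_map_free(&map);
    }
)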
01:25:03.626 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.626 EAL: Restoring previous memory policy: 4 01:25:03.626 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.626 EAL: request: mp_malloc_sync 01:25:03.626 EAL: No shared files mode enabled, IPC is disabled 01:25:03.626 EAL: Heap on socket 0 was expanded by 66MB 01:25:03.626 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.626 EAL: request: mp_malloc_sync 01:25:03.626 EAL: No shared files mode enabled, IPC is disabled 01:25:03.626 EAL: Heap on socket 0 was shrunk by 66MB 01:25:03.626 EAL: Trying to obtain current memory policy. 01:25:03.626 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:03.894 EAL: Restoring previous memory policy: 4 01:25:03.894 EAL: Calling mem event callback 'spdk:(nil)' 01:25:03.894 EAL: request: mp_malloc_sync 01:25:03.894 EAL: No shared files mode enabled, IPC is disabled 01:25:03.894 EAL: Heap on socket 0 was expanded by 130MB 01:25:03.894 EAL: Calling mem event callback 'spdk:(nil)' 01:25:04.170 EAL: request: mp_malloc_sync 01:25:04.170 EAL: No shared files mode enabled, IPC is disabled 01:25:04.170 EAL: Heap on socket 0 was shrunk by 130MB 01:25:04.170 EAL: Trying to obtain current memory policy. 01:25:04.170 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:04.170 EAL: Restoring previous memory policy: 4 01:25:04.170 EAL: Calling mem event callback 'spdk:(nil)' 01:25:04.170 EAL: request: mp_malloc_sync 01:25:04.170 EAL: No shared files mode enabled, IPC is disabled 01:25:04.170 EAL: Heap on socket 0 was expanded by 258MB 01:25:04.735 EAL: Calling mem event callback 'spdk:(nil)' 01:25:04.735 EAL: request: mp_malloc_sync 01:25:04.735 EAL: No shared files mode enabled, IPC is disabled 01:25:04.735 EAL: Heap on socket 0 was shrunk by 258MB 01:25:04.992 EAL: Trying to obtain current memory policy. 01:25:04.992 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:05.250 EAL: Restoring previous memory policy: 4 01:25:05.250 EAL: Calling mem event callback 'spdk:(nil)' 01:25:05.250 EAL: request: mp_malloc_sync 01:25:05.250 EAL: No shared files mode enabled, IPC is disabled 01:25:05.250 EAL: Heap on socket 0 was expanded by 514MB 01:25:06.181 EAL: Calling mem event callback 'spdk:(nil)' 01:25:06.181 EAL: request: mp_malloc_sync 01:25:06.181 EAL: No shared files mode enabled, IPC is disabled 01:25:06.181 EAL: Heap on socket 0 was shrunk by 514MB 01:25:07.113 EAL: Trying to obtain current memory policy. 
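(Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair above is one allocation in vtophys_malloc_test: DPDK grows its malloc heap with fresh 2 MB hugepages, fires the registered 'spdk:' mem event callback so the vtophys map stays current, then returns the pages when the buffer is freed, per "Hugepages will be freed exactly as allocated" earlier in the log. A sketch of the allocate-and-translate pattern being tested, again assuming the public spdk/env.h API:

    /* vtophys demo (sketch; spdk/env.h assumed, checks trimmed). */
    #include "spdk/env.h"

    static void
    vtophys_demo(void)
    {
            uint64_t phys = 0;
            uint64_t size = 4 * 1024 * 1024;
            /* DMA-safe allocation; may expand the heap as in the log. */
            void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MB align */, &phys);

            /* Re-translate later; SPDK_VTOPHYS_ERROR marks unregistered memory. */
            if (spdk_vtophys(buf, &size) == SPDK_VTOPHYS_ERROR) {
                    /* not expected for a live spdk_dma_malloc() buffer */
            }
            spdk_dma_free(buf); /* heap may shrink again, as logged above */
    }
)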
01:25:07.113 EAL: Setting policy MPOL_PREFERRED for socket 0 01:25:07.371 EAL: Restoring previous memory policy: 4 01:25:07.371 EAL: Calling mem event callback 'spdk:(nil)' 01:25:07.371 EAL: request: mp_malloc_sync 01:25:07.371 EAL: No shared files mode enabled, IPC is disabled 01:25:07.371 EAL: Heap on socket 0 was expanded by 1026MB 01:25:09.282 EAL: Calling mem event callback 'spdk:(nil)' 01:25:09.541 EAL: request: mp_malloc_sync 01:25:09.541 EAL: No shared files mode enabled, IPC is disabled 01:25:09.541 EAL: Heap on socket 0 was shrunk by 1026MB 01:25:10.913 passed 01:25:10.913 01:25:10.913 Run Summary: Type Total Ran Passed Failed Inactive 01:25:10.913 suites 1 1 n/a 0 0 01:25:10.913 tests 2 2 2 0 0 01:25:10.913 asserts 5649 5649 5649 0 n/a 01:25:10.913 01:25:10.913 Elapsed time = 8.030 seconds 01:25:10.913 EAL: Calling mem event callback 'spdk:(nil)' 01:25:10.913 EAL: request: mp_malloc_sync 01:25:10.913 EAL: No shared files mode enabled, IPC is disabled 01:25:10.913 EAL: Heap on socket 0 was shrunk by 2MB 01:25:10.913 EAL: No shared files mode enabled, IPC is disabled 01:25:10.913 EAL: No shared files mode enabled, IPC is disabled 01:25:10.913 EAL: No shared files mode enabled, IPC is disabled 01:25:10.913 01:25:10.913 real 0m8.393s 01:25:10.913 user 0m6.728s 01:25:10.913 sys 0m1.479s 01:25:10.913 05:20:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:10.913 05:20:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 01:25:10.913 ************************************ 01:25:10.913 END TEST env_vtophys 01:25:10.913 ************************************ 01:25:10.913 05:20:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:25:10.913 05:20:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:10.913 05:20:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:10.913 05:20:02 env -- common/autotest_common.sh@10 -- # set +x 01:25:10.913 ************************************ 01:25:10.913 START TEST env_pci 01:25:10.913 ************************************ 01:25:10.913 05:20:02 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:25:10.913 01:25:10.913 01:25:10.913 CUnit - A unit testing framework for C - Version 2.1-3 01:25:10.913 http://cunit.sourceforge.net/ 01:25:10.913 01:25:10.913 01:25:10.913 Suite: pci 01:25:10.913 Test: pci_hook ...[2024-12-09 05:20:02.495067] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57806 has claimed it 01:25:10.913 passed 01:25:10.913 01:25:10.913 EAL: Cannot find device (10000:00:01.0) 01:25:10.913 EAL: Failed to attach device on primary process 01:25:10.913 Run Summary: Type Total Ran Passed Failed Inactive 01:25:10.913 suites 1 1 n/a 0 0 01:25:10.913 tests 1 1 1 0 0 01:25:10.913 asserts 25 25 25 0 n/a 01:25:10.913 01:25:10.913 Elapsed time = 0.008 seconds 01:25:11.171 01:25:11.171 real 0m0.080s 01:25:11.171 user 0m0.041s 01:25:11.171 sys 0m0.039s 01:25:11.171 05:20:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:11.171 05:20:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 01:25:11.171 ************************************ 01:25:11.171 END TEST env_pci 01:25:11.171 ************************************ 01:25:11.171 05:20:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 01:25:11.171 05:20:02 env -- env/env.sh@15 -- # uname 01:25:11.171 05:20:02 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 01:25:11.171 05:20:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 01:25:11.171 05:20:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:25:11.171 05:20:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:11.171 05:20:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:11.171 05:20:02 env -- common/autotest_common.sh@10 -- # set +x 01:25:11.171 ************************************ 01:25:11.171 START TEST env_dpdk_post_init 01:25:11.171 ************************************ 01:25:11.171 05:20:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:25:11.171 EAL: Detected CPU lcores: 10 01:25:11.171 EAL: Detected NUMA nodes: 1 01:25:11.171 EAL: Detected shared linkage of DPDK 01:25:11.171 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:25:11.171 EAL: Selected IOVA mode 'PA' 01:25:11.430 TELEMETRY: No legacy callbacks, legacy socket not created 01:25:11.430 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:25:11.430 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:25:11.430 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 01:25:11.430 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 01:25:11.430 Starting DPDK initialization... 01:25:11.430 Starting SPDK post initialization... 01:25:11.430 SPDK NVMe probe 01:25:11.430 Attaching to 0000:00:10.0 01:25:11.430 Attaching to 0000:00:11.0 01:25:11.430 Attaching to 0000:00:12.0 01:25:11.430 Attaching to 0000:00:13.0 01:25:11.430 Attached to 0000:00:10.0 01:25:11.430 Attached to 0000:00:11.0 01:25:11.430 Attached to 0000:00:13.0 01:25:11.430 Attached to 0000:00:12.0 01:25:11.430 Cleaning up... 
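(The probe/attach sequence above comes from env_dpdk_post_init, invoked with -c 0x1 --base-virtaddr=0x200000000000 — the argv that env.sh assembled just before — and attaching the four emulated NVMe controllers at 0000:00:10.0 through 0000:00:13.0. A rough programmatic equivalent of those flags, assuming the spdk_env_opts fields from spdk/env.h:

    /* Env-layer init mirroring the flags above (sketch; error paths omitted). */
    #include "spdk/env.h"

    static int
    env_setup(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "env_dpdk_post_init";
            opts.core_mask = "0x1";                 /* -c 0x1 */
            opts.base_virtaddr = 0x200000000000ULL; /* --base-virtaddr=... */
            return spdk_env_init(&opts);            /* 0 on success */
    }
)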
01:25:11.430 01:25:11.430 real 0m0.296s 01:25:11.430 user 0m0.106s 01:25:11.430 sys 0m0.093s 01:25:11.430 05:20:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:11.430 05:20:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:25:11.430 ************************************ 01:25:11.430 END TEST env_dpdk_post_init 01:25:11.430 ************************************ 01:25:11.430 05:20:02 env -- env/env.sh@26 -- # uname 01:25:11.430 05:20:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:25:11.430 05:20:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:25:11.430 05:20:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:11.430 05:20:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:11.430 05:20:02 env -- common/autotest_common.sh@10 -- # set +x 01:25:11.431 ************************************ 01:25:11.431 START TEST env_mem_callbacks 01:25:11.431 ************************************ 01:25:11.431 05:20:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:25:11.431 EAL: Detected CPU lcores: 10 01:25:11.431 EAL: Detected NUMA nodes: 1 01:25:11.431 EAL: Detected shared linkage of DPDK 01:25:11.689 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:25:11.689 EAL: Selected IOVA mode 'PA' 01:25:11.689 TELEMETRY: No legacy callbacks, legacy socket not created 01:25:11.689 01:25:11.689 01:25:11.689 CUnit - A unit testing framework for C - Version 2.1-3 01:25:11.689 http://cunit.sourceforge.net/ 01:25:11.689 01:25:11.689 01:25:11.689 Suite: memory 01:25:11.689 Test: test ... 01:25:11.689 register 0x200000200000 2097152 01:25:11.689 malloc 3145728 01:25:11.689 register 0x200000400000 4194304 01:25:11.689 buf 0x2000004fffc0 len 3145728 PASSED 01:25:11.689 malloc 64 01:25:11.689 buf 0x2000004ffec0 len 64 PASSED 01:25:11.689 malloc 4194304 01:25:11.689 register 0x200000800000 6291456 01:25:11.689 buf 0x2000009fffc0 len 4194304 PASSED 01:25:11.689 free 0x2000004fffc0 3145728 01:25:11.689 free 0x2000004ffec0 64 01:25:11.689 unregister 0x200000400000 4194304 PASSED 01:25:11.689 free 0x2000009fffc0 4194304 01:25:11.689 unregister 0x200000800000 6291456 PASSED 01:25:11.689 malloc 8388608 01:25:11.689 register 0x200000400000 10485760 01:25:11.689 buf 0x2000005fffc0 len 8388608 PASSED 01:25:11.689 free 0x2000005fffc0 8388608 01:25:11.689 unregister 0x200000400000 10485760 PASSED 01:25:11.689 passed 01:25:11.689 01:25:11.689 Run Summary: Type Total Ran Passed Failed Inactive 01:25:11.689 suites 1 1 n/a 0 0 01:25:11.689 tests 1 1 1 0 0 01:25:11.689 asserts 15 15 15 0 n/a 01:25:11.689 01:25:11.689 Elapsed time = 0.077 seconds 01:25:11.689 01:25:11.689 real 0m0.300s 01:25:11.689 user 0m0.116s 01:25:11.689 sys 0m0.082s 01:25:11.689 05:20:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:11.689 05:20:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 01:25:11.689 ************************************ 01:25:11.689 END TEST env_mem_callbacks 01:25:11.689 ************************************ 01:25:11.948 01:25:11.948 real 0m9.931s 01:25:11.948 user 0m7.547s 01:25:11.948 sys 0m1.974s 01:25:11.948 05:20:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:11.948 05:20:03 env -- common/autotest_common.sh@10 -- # set +x 01:25:11.948 ************************************ 01:25:11.948 END TEST env 01:25:11.948 
************************************ 01:25:11.948 05:20:03 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:25:11.948 05:20:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:11.948 05:20:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:11.948 05:20:03 -- common/autotest_common.sh@10 -- # set +x 01:25:11.948 ************************************ 01:25:11.948 START TEST rpc 01:25:11.948 ************************************ 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:25:11.948 * Looking for test storage... 01:25:11.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:11.948 05:20:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:11.948 05:20:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 01:25:11.948 05:20:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 01:25:11.948 05:20:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 01:25:11.948 05:20:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:11.948 05:20:03 rpc -- scripts/common.sh@344 -- # case "$op" in 01:25:11.948 05:20:03 rpc -- scripts/common.sh@345 -- # : 1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:11.948 05:20:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:11.948 05:20:03 rpc -- scripts/common.sh@365 -- # decimal 1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@353 -- # local d=1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:11.948 05:20:03 rpc -- scripts/common.sh@355 -- # echo 1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:25:11.948 05:20:03 rpc -- scripts/common.sh@366 -- # decimal 2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@353 -- # local d=2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:11.948 05:20:03 rpc -- scripts/common.sh@355 -- # echo 2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:25:11.948 05:20:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:11.948 05:20:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:11.948 05:20:03 rpc -- scripts/common.sh@368 -- # return 0 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:11.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:11.948 --rc genhtml_branch_coverage=1 01:25:11.948 --rc genhtml_function_coverage=1 01:25:11.948 --rc genhtml_legend=1 01:25:11.948 --rc geninfo_all_blocks=1 01:25:11.948 --rc geninfo_unexecuted_blocks=1 01:25:11.948 01:25:11.948 ' 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:11.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:11.948 --rc genhtml_branch_coverage=1 01:25:11.948 --rc genhtml_function_coverage=1 01:25:11.948 --rc genhtml_legend=1 01:25:11.948 --rc geninfo_all_blocks=1 01:25:11.948 --rc geninfo_unexecuted_blocks=1 01:25:11.948 01:25:11.948 ' 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:11.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:11.948 --rc genhtml_branch_coverage=1 01:25:11.948 --rc genhtml_function_coverage=1 01:25:11.948 --rc genhtml_legend=1 01:25:11.948 --rc geninfo_all_blocks=1 01:25:11.948 --rc geninfo_unexecuted_blocks=1 01:25:11.948 01:25:11.948 ' 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:11.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:11.948 --rc genhtml_branch_coverage=1 01:25:11.948 --rc genhtml_function_coverage=1 01:25:11.948 --rc genhtml_legend=1 01:25:11.948 --rc geninfo_all_blocks=1 01:25:11.948 --rc geninfo_unexecuted_blocks=1 01:25:11.948 01:25:11.948 ' 01:25:11.948 05:20:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57933 01:25:11.948 05:20:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:25:11.948 05:20:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:25:11.948 05:20:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57933 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 57933 ']' 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:11.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
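(waitforlisten blocks until the freshly started spdk_tgt — pid 57933 here — answers on /var/tmp/spdk.sock; everything the rpc suite below does goes through that JSON-RPC socket via rpc_cmd. On the target side, each method name maps to a C handler; a hedged sketch of how one is registered, assuming the SPDK_RPC_REGISTER machinery from spdk/rpc.h — the method name is made up:

    /* Target-side RPC registration (sketch; "example_ping" is hypothetical). */
    #include "spdk/jsonrpc.h"
    #include "spdk/rpc.h"

    static void
    rpc_example_ping(struct spdk_jsonrpc_request *request,
                     const struct spdk_json_val *params)
    {
            struct spdk_json_write_ctx *w;

            if (params != NULL) {
                    spdk_jsonrpc_send_error_response(request,
                            SPDK_JSONRPC_ERROR_INVALID_PARAMS, "no parameters");
                    return;
            }
            w = spdk_jsonrpc_begin_result(request);
            spdk_json_write_bool(w, true);
            spdk_jsonrpc_end_result(request, w);
    }
    SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)
)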
01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:11.948 05:20:03 rpc -- common/autotest_common.sh@10 -- # set +x 01:25:12.206 [2024-12-09 05:20:03.700554] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:12.206 [2024-12-09 05:20:03.700789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57933 ] 01:25:12.465 [2024-12-09 05:20:03.887854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:12.465 [2024-12-09 05:20:04.042935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:25:12.465 [2024-12-09 05:20:04.043044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57933' to capture a snapshot of events at runtime. 01:25:12.465 [2024-12-09 05:20:04.043071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:12.465 [2024-12-09 05:20:04.043096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:12.465 [2024-12-09 05:20:04.043107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57933 for offline analysis/debug. 01:25:12.465 [2024-12-09 05:20:04.044602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:13.839 05:20:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:13.839 05:20:05 rpc -- common/autotest_common.sh@868 -- # return 0 01:25:13.839 05:20:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:25:13.839 05:20:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:25:13.839 05:20:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:25:13.839 05:20:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:25:13.839 05:20:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:13.839 05:20:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:13.839 05:20:05 rpc -- common/autotest_common.sh@10 -- # set +x 01:25:13.839 ************************************ 01:25:13.839 START TEST rpc_integrity 01:25:13.839 ************************************ 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.839 05:20:05 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:25:13.839 { 01:25:13.839 "name": "Malloc0", 01:25:13.839 "aliases": [ 01:25:13.839 "8c7e2f17-c6ea-4267-9b18-914c07003ebd" 01:25:13.839 ], 01:25:13.839 "product_name": "Malloc disk", 01:25:13.839 "block_size": 512, 01:25:13.839 "num_blocks": 16384, 01:25:13.839 "uuid": "8c7e2f17-c6ea-4267-9b18-914c07003ebd", 01:25:13.839 "assigned_rate_limits": { 01:25:13.839 "rw_ios_per_sec": 0, 01:25:13.839 "rw_mbytes_per_sec": 0, 01:25:13.839 "r_mbytes_per_sec": 0, 01:25:13.839 "w_mbytes_per_sec": 0 01:25:13.839 }, 01:25:13.839 "claimed": false, 01:25:13.839 "zoned": false, 01:25:13.839 "supported_io_types": { 01:25:13.839 "read": true, 01:25:13.839 "write": true, 01:25:13.839 "unmap": true, 01:25:13.839 "flush": true, 01:25:13.839 "reset": true, 01:25:13.839 "nvme_admin": false, 01:25:13.839 "nvme_io": false, 01:25:13.839 "nvme_io_md": false, 01:25:13.839 "write_zeroes": true, 01:25:13.839 "zcopy": true, 01:25:13.839 "get_zone_info": false, 01:25:13.839 "zone_management": false, 01:25:13.839 "zone_append": false, 01:25:13.839 "compare": false, 01:25:13.839 "compare_and_write": false, 01:25:13.839 "abort": true, 01:25:13.839 "seek_hole": false, 01:25:13.839 "seek_data": false, 01:25:13.839 "copy": true, 01:25:13.839 "nvme_iov_md": false 01:25:13.839 }, 01:25:13.839 "memory_domains": [ 01:25:13.839 { 01:25:13.839 "dma_device_id": "system", 01:25:13.839 "dma_device_type": 1 01:25:13.839 }, 01:25:13.839 { 01:25:13.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:13.839 "dma_device_type": 2 01:25:13.839 } 01:25:13.839 ], 01:25:13.839 "driver_specific": {} 01:25:13.839 } 01:25:13.839 ]' 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.839 [2024-12-09 05:20:05.192090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:25:13.839 [2024-12-09 05:20:05.192241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:13.839 [2024-12-09 05:20:05.192290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:25:13.839 [2024-12-09 05:20:05.192311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:13.839 [2024-12-09 05:20:05.195665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:13.839 [2024-12-09 05:20:05.195738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:25:13.839 Passthru0 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.839 
05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.839 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.839 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:25:13.839 { 01:25:13.839 "name": "Malloc0", 01:25:13.839 "aliases": [ 01:25:13.839 "8c7e2f17-c6ea-4267-9b18-914c07003ebd" 01:25:13.839 ], 01:25:13.839 "product_name": "Malloc disk", 01:25:13.839 "block_size": 512, 01:25:13.839 "num_blocks": 16384, 01:25:13.839 "uuid": "8c7e2f17-c6ea-4267-9b18-914c07003ebd", 01:25:13.839 "assigned_rate_limits": { 01:25:13.839 "rw_ios_per_sec": 0, 01:25:13.839 "rw_mbytes_per_sec": 0, 01:25:13.839 "r_mbytes_per_sec": 0, 01:25:13.839 "w_mbytes_per_sec": 0 01:25:13.839 }, 01:25:13.839 "claimed": true, 01:25:13.839 "claim_type": "exclusive_write", 01:25:13.839 "zoned": false, 01:25:13.839 "supported_io_types": { 01:25:13.839 "read": true, 01:25:13.839 "write": true, 01:25:13.839 "unmap": true, 01:25:13.839 "flush": true, 01:25:13.839 "reset": true, 01:25:13.839 "nvme_admin": false, 01:25:13.839 "nvme_io": false, 01:25:13.839 "nvme_io_md": false, 01:25:13.839 "write_zeroes": true, 01:25:13.839 "zcopy": true, 01:25:13.839 "get_zone_info": false, 01:25:13.839 "zone_management": false, 01:25:13.839 "zone_append": false, 01:25:13.839 "compare": false, 01:25:13.839 "compare_and_write": false, 01:25:13.839 "abort": true, 01:25:13.839 "seek_hole": false, 01:25:13.839 "seek_data": false, 01:25:13.839 "copy": true, 01:25:13.839 "nvme_iov_md": false 01:25:13.839 }, 01:25:13.839 "memory_domains": [ 01:25:13.839 { 01:25:13.839 "dma_device_id": "system", 01:25:13.840 "dma_device_type": 1 01:25:13.840 }, 01:25:13.840 { 01:25:13.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:13.840 "dma_device_type": 2 01:25:13.840 } 01:25:13.840 ], 01:25:13.840 "driver_specific": {} 01:25:13.840 }, 01:25:13.840 { 01:25:13.840 "name": "Passthru0", 01:25:13.840 "aliases": [ 01:25:13.840 "f30d055b-9300-5338-9870-c3b08b28f219" 01:25:13.840 ], 01:25:13.840 "product_name": "passthru", 01:25:13.840 "block_size": 512, 01:25:13.840 "num_blocks": 16384, 01:25:13.840 "uuid": "f30d055b-9300-5338-9870-c3b08b28f219", 01:25:13.840 "assigned_rate_limits": { 01:25:13.840 "rw_ios_per_sec": 0, 01:25:13.840 "rw_mbytes_per_sec": 0, 01:25:13.840 "r_mbytes_per_sec": 0, 01:25:13.840 "w_mbytes_per_sec": 0 01:25:13.840 }, 01:25:13.840 "claimed": false, 01:25:13.840 "zoned": false, 01:25:13.840 "supported_io_types": { 01:25:13.840 "read": true, 01:25:13.840 "write": true, 01:25:13.840 "unmap": true, 01:25:13.840 "flush": true, 01:25:13.840 "reset": true, 01:25:13.840 "nvme_admin": false, 01:25:13.840 "nvme_io": false, 01:25:13.840 "nvme_io_md": false, 01:25:13.840 "write_zeroes": true, 01:25:13.840 "zcopy": true, 01:25:13.840 "get_zone_info": false, 01:25:13.840 "zone_management": false, 01:25:13.840 "zone_append": false, 01:25:13.840 "compare": false, 01:25:13.840 "compare_and_write": false, 01:25:13.840 "abort": true, 01:25:13.840 "seek_hole": false, 01:25:13.840 "seek_data": false, 01:25:13.840 "copy": true, 01:25:13.840 "nvme_iov_md": false 01:25:13.840 }, 01:25:13.840 "memory_domains": [ 01:25:13.840 { 01:25:13.840 "dma_device_id": "system", 01:25:13.840 "dma_device_type": 1 01:25:13.840 }, 01:25:13.840 { 01:25:13.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:13.840 "dma_device_type": 2 
01:25:13.840 } 01:25:13.840 ], 01:25:13.840 "driver_specific": { 01:25:13.840 "passthru": { 01:25:13.840 "name": "Passthru0", 01:25:13.840 "base_bdev_name": "Malloc0" 01:25:13.840 } 01:25:13.840 } 01:25:13.840 } 01:25:13.840 ]' 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:25:13.840 05:20:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:25:13.840 01:25:13.840 real 0m0.361s 01:25:13.840 user 0m0.219s 01:25:13.840 sys 0m0.043s 01:25:13.840 ************************************ 01:25:13.840 END TEST rpc_integrity 01:25:13.840 ************************************ 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:13.840 05:20:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:13.840 05:20:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:25:13.840 05:20:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:13.840 05:20:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:13.840 05:20:05 rpc -- common/autotest_common.sh@10 -- # set +x 01:25:13.840 ************************************ 01:25:13.840 START TEST rpc_plugins 01:25:13.840 ************************************ 01:25:13.840 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 01:25:13.840 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 01:25:13.840 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.840 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:25:13.840 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:13.840 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:25:13.840 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:25:13.840 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:13.840 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:25:14.097 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.097 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 01:25:14.097 { 01:25:14.097 "name": "Malloc1", 01:25:14.097 "aliases": 
[ 01:25:14.097 "e272caf2-1c6c-455d-94f9-b63b36e15808" 01:25:14.097 ], 01:25:14.097 "product_name": "Malloc disk", 01:25:14.097 "block_size": 4096, 01:25:14.097 "num_blocks": 256, 01:25:14.097 "uuid": "e272caf2-1c6c-455d-94f9-b63b36e15808", 01:25:14.097 "assigned_rate_limits": { 01:25:14.097 "rw_ios_per_sec": 0, 01:25:14.097 "rw_mbytes_per_sec": 0, 01:25:14.097 "r_mbytes_per_sec": 0, 01:25:14.097 "w_mbytes_per_sec": 0 01:25:14.097 }, 01:25:14.097 "claimed": false, 01:25:14.097 "zoned": false, 01:25:14.097 "supported_io_types": { 01:25:14.097 "read": true, 01:25:14.097 "write": true, 01:25:14.097 "unmap": true, 01:25:14.097 "flush": true, 01:25:14.097 "reset": true, 01:25:14.097 "nvme_admin": false, 01:25:14.097 "nvme_io": false, 01:25:14.097 "nvme_io_md": false, 01:25:14.097 "write_zeroes": true, 01:25:14.097 "zcopy": true, 01:25:14.097 "get_zone_info": false, 01:25:14.097 "zone_management": false, 01:25:14.097 "zone_append": false, 01:25:14.097 "compare": false, 01:25:14.097 "compare_and_write": false, 01:25:14.097 "abort": true, 01:25:14.097 "seek_hole": false, 01:25:14.097 "seek_data": false, 01:25:14.097 "copy": true, 01:25:14.097 "nvme_iov_md": false 01:25:14.097 }, 01:25:14.097 "memory_domains": [ 01:25:14.097 { 01:25:14.097 "dma_device_id": "system", 01:25:14.097 "dma_device_type": 1 01:25:14.097 }, 01:25:14.097 { 01:25:14.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:14.097 "dma_device_type": 2 01:25:14.097 } 01:25:14.097 ], 01:25:14.097 "driver_specific": {} 01:25:14.097 } 01:25:14.097 ]' 01:25:14.097 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:25:14.097 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:25:14.098 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.098 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.098 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:25:14.098 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:25:14.098 05:20:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:25:14.098 01:25:14.098 real 0m0.171s 01:25:14.098 user 0m0.109s 01:25:14.098 sys 0m0.020s 01:25:14.098 ************************************ 01:25:14.098 END TEST rpc_plugins 01:25:14.098 ************************************ 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:14.098 05:20:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:25:14.098 05:20:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:25:14.098 05:20:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:14.098 05:20:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:14.098 05:20:05 rpc -- common/autotest_common.sh@10 -- # set +x 01:25:14.098 ************************************ 01:25:14.098 START TEST rpc_trace_cmd_test 01:25:14.098 ************************************ 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:25:14.098 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57933", 01:25:14.098 "tpoint_group_mask": "0x8", 01:25:14.098 "iscsi_conn": { 01:25:14.098 "mask": "0x2", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "scsi": { 01:25:14.098 "mask": "0x4", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "bdev": { 01:25:14.098 "mask": "0x8", 01:25:14.098 "tpoint_mask": "0xffffffffffffffff" 01:25:14.098 }, 01:25:14.098 "nvmf_rdma": { 01:25:14.098 "mask": "0x10", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "nvmf_tcp": { 01:25:14.098 "mask": "0x20", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "ftl": { 01:25:14.098 "mask": "0x40", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "blobfs": { 01:25:14.098 "mask": "0x80", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "dsa": { 01:25:14.098 "mask": "0x200", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "thread": { 01:25:14.098 "mask": "0x400", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "nvme_pcie": { 01:25:14.098 "mask": "0x800", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "iaa": { 01:25:14.098 "mask": "0x1000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "nvme_tcp": { 01:25:14.098 "mask": "0x2000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "bdev_nvme": { 01:25:14.098 "mask": "0x4000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "sock": { 01:25:14.098 "mask": "0x8000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "blob": { 01:25:14.098 "mask": "0x10000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "bdev_raid": { 01:25:14.098 "mask": "0x20000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 }, 01:25:14.098 "scheduler": { 01:25:14.098 "mask": "0x40000", 01:25:14.098 "tpoint_mask": "0x0" 01:25:14.098 } 01:25:14.098 }' 01:25:14.098 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:25:14.356 01:25:14.356 real 0m0.292s 01:25:14.356 user 0m0.252s 01:25:14.356 sys 0m0.030s 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:25:14.356 05:20:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:25:14.356 ************************************ 01:25:14.356 END TEST rpc_trace_cmd_test 01:25:14.356 ************************************ 01:25:14.614 05:20:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 01:25:14.614 05:20:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:25:14.614 05:20:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:25:14.614 05:20:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:14.614 05:20:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:14.614 05:20:05 rpc -- common/autotest_common.sh@10 -- # set +x 01:25:14.614 ************************************ 01:25:14.614 START TEST rpc_daemon_integrity 01:25:14.614 ************************************ 01:25:14.614 05:20:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:25:14.614 05:20:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:25:14.614 05:20:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.614 05:20:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:25:14.614 { 01:25:14.614 "name": "Malloc2", 01:25:14.614 "aliases": [ 01:25:14.614 "94c49496-c907-4f48-b11c-1d46d272c51e" 01:25:14.614 ], 01:25:14.614 "product_name": "Malloc disk", 01:25:14.614 "block_size": 512, 01:25:14.614 "num_blocks": 16384, 01:25:14.614 "uuid": "94c49496-c907-4f48-b11c-1d46d272c51e", 01:25:14.614 "assigned_rate_limits": { 01:25:14.614 "rw_ios_per_sec": 0, 01:25:14.614 "rw_mbytes_per_sec": 0, 01:25:14.614 "r_mbytes_per_sec": 0, 01:25:14.614 "w_mbytes_per_sec": 0 01:25:14.614 }, 01:25:14.614 "claimed": false, 01:25:14.614 "zoned": false, 01:25:14.614 "supported_io_types": { 01:25:14.614 "read": true, 01:25:14.614 "write": true, 01:25:14.614 "unmap": true, 01:25:14.614 "flush": true, 01:25:14.614 "reset": true, 01:25:14.614 "nvme_admin": false, 01:25:14.614 "nvme_io": false, 01:25:14.614 "nvme_io_md": false, 01:25:14.614 "write_zeroes": true, 01:25:14.614 "zcopy": true, 01:25:14.614 "get_zone_info": false, 01:25:14.614 "zone_management": false, 01:25:14.614 "zone_append": false, 01:25:14.614 "compare": false, 01:25:14.614 
"compare_and_write": false, 01:25:14.614 "abort": true, 01:25:14.614 "seek_hole": false, 01:25:14.614 "seek_data": false, 01:25:14.614 "copy": true, 01:25:14.614 "nvme_iov_md": false 01:25:14.614 }, 01:25:14.614 "memory_domains": [ 01:25:14.614 { 01:25:14.614 "dma_device_id": "system", 01:25:14.614 "dma_device_type": 1 01:25:14.614 }, 01:25:14.614 { 01:25:14.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:14.614 "dma_device_type": 2 01:25:14.614 } 01:25:14.614 ], 01:25:14.614 "driver_specific": {} 01:25:14.614 } 01:25:14.614 ]' 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.614 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.614 [2024-12-09 05:20:06.177313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 01:25:14.614 [2024-12-09 05:20:06.177496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:14.614 [2024-12-09 05:20:06.177535] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:25:14.614 [2024-12-09 05:20:06.177555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:14.614 [2024-12-09 05:20:06.181046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:14.614 [2024-12-09 05:20:06.181097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:25:14.615 Passthru0 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:25:14.615 { 01:25:14.615 "name": "Malloc2", 01:25:14.615 "aliases": [ 01:25:14.615 "94c49496-c907-4f48-b11c-1d46d272c51e" 01:25:14.615 ], 01:25:14.615 "product_name": "Malloc disk", 01:25:14.615 "block_size": 512, 01:25:14.615 "num_blocks": 16384, 01:25:14.615 "uuid": "94c49496-c907-4f48-b11c-1d46d272c51e", 01:25:14.615 "assigned_rate_limits": { 01:25:14.615 "rw_ios_per_sec": 0, 01:25:14.615 "rw_mbytes_per_sec": 0, 01:25:14.615 "r_mbytes_per_sec": 0, 01:25:14.615 "w_mbytes_per_sec": 0 01:25:14.615 }, 01:25:14.615 "claimed": true, 01:25:14.615 "claim_type": "exclusive_write", 01:25:14.615 "zoned": false, 01:25:14.615 "supported_io_types": { 01:25:14.615 "read": true, 01:25:14.615 "write": true, 01:25:14.615 "unmap": true, 01:25:14.615 "flush": true, 01:25:14.615 "reset": true, 01:25:14.615 "nvme_admin": false, 01:25:14.615 "nvme_io": false, 01:25:14.615 "nvme_io_md": false, 01:25:14.615 "write_zeroes": true, 01:25:14.615 "zcopy": true, 01:25:14.615 "get_zone_info": false, 01:25:14.615 "zone_management": false, 01:25:14.615 "zone_append": false, 01:25:14.615 "compare": false, 01:25:14.615 "compare_and_write": false, 01:25:14.615 "abort": true, 01:25:14.615 "seek_hole": false, 01:25:14.615 "seek_data": false, 
01:25:14.615 "copy": true, 01:25:14.615 "nvme_iov_md": false 01:25:14.615 }, 01:25:14.615 "memory_domains": [ 01:25:14.615 { 01:25:14.615 "dma_device_id": "system", 01:25:14.615 "dma_device_type": 1 01:25:14.615 }, 01:25:14.615 { 01:25:14.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:14.615 "dma_device_type": 2 01:25:14.615 } 01:25:14.615 ], 01:25:14.615 "driver_specific": {} 01:25:14.615 }, 01:25:14.615 { 01:25:14.615 "name": "Passthru0", 01:25:14.615 "aliases": [ 01:25:14.615 "360b6a70-226d-578d-a335-5a17c6c1426b" 01:25:14.615 ], 01:25:14.615 "product_name": "passthru", 01:25:14.615 "block_size": 512, 01:25:14.615 "num_blocks": 16384, 01:25:14.615 "uuid": "360b6a70-226d-578d-a335-5a17c6c1426b", 01:25:14.615 "assigned_rate_limits": { 01:25:14.615 "rw_ios_per_sec": 0, 01:25:14.615 "rw_mbytes_per_sec": 0, 01:25:14.615 "r_mbytes_per_sec": 0, 01:25:14.615 "w_mbytes_per_sec": 0 01:25:14.615 }, 01:25:14.615 "claimed": false, 01:25:14.615 "zoned": false, 01:25:14.615 "supported_io_types": { 01:25:14.615 "read": true, 01:25:14.615 "write": true, 01:25:14.615 "unmap": true, 01:25:14.615 "flush": true, 01:25:14.615 "reset": true, 01:25:14.615 "nvme_admin": false, 01:25:14.615 "nvme_io": false, 01:25:14.615 "nvme_io_md": false, 01:25:14.615 "write_zeroes": true, 01:25:14.615 "zcopy": true, 01:25:14.615 "get_zone_info": false, 01:25:14.615 "zone_management": false, 01:25:14.615 "zone_append": false, 01:25:14.615 "compare": false, 01:25:14.615 "compare_and_write": false, 01:25:14.615 "abort": true, 01:25:14.615 "seek_hole": false, 01:25:14.615 "seek_data": false, 01:25:14.615 "copy": true, 01:25:14.615 "nvme_iov_md": false 01:25:14.615 }, 01:25:14.615 "memory_domains": [ 01:25:14.615 { 01:25:14.615 "dma_device_id": "system", 01:25:14.615 "dma_device_type": 1 01:25:14.615 }, 01:25:14.615 { 01:25:14.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:14.615 "dma_device_type": 2 01:25:14.615 } 01:25:14.615 ], 01:25:14.615 "driver_specific": { 01:25:14.615 "passthru": { 01:25:14.615 "name": "Passthru0", 01:25:14.615 "base_bdev_name": "Malloc2" 01:25:14.615 } 01:25:14.615 } 01:25:14.615 } 01:25:14.615 ]' 01:25:14.615 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:25:14.872 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:25:14.872 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:25:14.872 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.872 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:25:14.873 01:25:14.873 real 0m0.385s 01:25:14.873 user 0m0.259s 01:25:14.873 sys 0m0.034s 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:14.873 05:20:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:25:14.873 ************************************ 01:25:14.873 END TEST rpc_daemon_integrity 01:25:14.873 ************************************ 01:25:14.873 05:20:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:25:14.873 05:20:06 rpc -- rpc/rpc.sh@84 -- # killprocess 57933 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 57933 ']' 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@958 -- # kill -0 57933 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@959 -- # uname 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57933 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:14.873 killing process with pid 57933 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57933' 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@973 -- # kill 57933 01:25:14.873 05:20:06 rpc -- common/autotest_common.sh@978 -- # wait 57933 01:25:17.404 01:25:17.404 real 0m5.532s 01:25:17.404 user 0m6.191s 01:25:17.404 sys 0m1.028s 01:25:17.404 05:20:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:17.404 ************************************ 01:25:17.404 END TEST rpc 01:25:17.404 ************************************ 01:25:17.404 05:20:08 rpc -- common/autotest_common.sh@10 -- # set +x 01:25:17.404 05:20:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:25:17.404 05:20:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:17.404 05:20:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:17.404 05:20:08 -- common/autotest_common.sh@10 -- # set +x 01:25:17.404 ************************************ 01:25:17.404 START TEST skip_rpc 01:25:17.404 ************************************ 01:25:17.404 05:20:08 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:25:17.663 * Looking for test storage... 
01:25:17.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@345 -- # : 1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:17.663 05:20:09 skip_rpc -- scripts/common.sh@368 -- # return 0 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:17.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:17.663 --rc genhtml_branch_coverage=1 01:25:17.663 --rc genhtml_function_coverage=1 01:25:17.663 --rc genhtml_legend=1 01:25:17.663 --rc geninfo_all_blocks=1 01:25:17.663 --rc geninfo_unexecuted_blocks=1 01:25:17.663 01:25:17.663 ' 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:17.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:17.663 --rc genhtml_branch_coverage=1 01:25:17.663 --rc genhtml_function_coverage=1 01:25:17.663 --rc genhtml_legend=1 01:25:17.663 --rc geninfo_all_blocks=1 01:25:17.663 --rc geninfo_unexecuted_blocks=1 01:25:17.663 01:25:17.663 ' 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:25:17.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:17.663 --rc genhtml_branch_coverage=1 01:25:17.663 --rc genhtml_function_coverage=1 01:25:17.663 --rc genhtml_legend=1 01:25:17.663 --rc geninfo_all_blocks=1 01:25:17.663 --rc geninfo_unexecuted_blocks=1 01:25:17.663 01:25:17.663 ' 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:17.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:17.663 --rc genhtml_branch_coverage=1 01:25:17.663 --rc genhtml_function_coverage=1 01:25:17.663 --rc genhtml_legend=1 01:25:17.663 --rc geninfo_all_blocks=1 01:25:17.663 --rc geninfo_unexecuted_blocks=1 01:25:17.663 01:25:17.663 ' 01:25:17.663 05:20:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:25:17.663 05:20:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:25:17.663 05:20:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:17.663 05:20:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:17.663 ************************************ 01:25:17.663 START TEST skip_rpc 01:25:17.663 ************************************ 01:25:17.663 05:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 01:25:17.663 05:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58167 01:25:17.663 05:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:25:17.663 05:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:25:17.664 05:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:25:17.922 [2024-12-09 05:20:09.310598] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
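Note on the startup line above: spdk_tgt was launched with --no-rpc-server, so no RPC listener is created at /var/tmp/spdk.sock; the spdk_get_version call attempted a few lines below must therefore fail, which is exactly what skip_rpc asserts. A hedged sketch of the same check outside the harness (binary and socket paths assumed):

  # Sketch: reproduce the skip_rpc expectation manually.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target with the RPC server disabled
  sleep 5                                       # mirror the test's settle time
  ./scripts/rpc.py spdk_get_version             # fails: nothing listens on the socket
  echo "rpc exit status: $?"                    # non-zero is the expected outcome
  kill %1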
01:25:17.922 [2024-12-09 05:20:09.310821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58167 ] 01:25:17.922 [2024-12-09 05:20:09.504484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:18.181 [2024-12-09 05:20:09.679848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58167 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58167 ']' 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58167 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58167 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58167' 01:25:23.451 killing process with pid 58167 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58167 01:25:23.451 05:20:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58167 01:25:25.346 01:25:25.346 real 0m7.615s 01:25:25.346 user 0m6.934s 01:25:25.346 sys 0m0.578s 01:25:25.346 05:20:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:25.346 05:20:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:25.346 ************************************ 01:25:25.346 END TEST skip_rpc 01:25:25.346 
************************************ 01:25:25.346 05:20:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:25:25.346 05:20:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:25.346 05:20:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:25.346 05:20:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:25.346 ************************************ 01:25:25.346 START TEST skip_rpc_with_json 01:25:25.346 ************************************ 01:25:25.346 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58277 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58277 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58277 ']' 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:25.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:25.347 05:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:25:25.604 [2024-12-09 05:20:16.978216] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:25.604 [2024-12-09 05:20:16.978408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58277 ] 01:25:25.604 [2024-12-09 05:20:17.156855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:25.863 [2024-12-09 05:20:17.306499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:25:26.799 [2024-12-09 05:20:18.292404] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:25:26.799 request: 01:25:26.799 { 01:25:26.799 "trtype": "tcp", 01:25:26.799 "method": "nvmf_get_transports", 01:25:26.799 "req_id": 1 01:25:26.799 } 01:25:26.799 Got JSON-RPC error response 01:25:26.799 response: 01:25:26.799 { 01:25:26.799 "code": -19, 01:25:26.799 "message": "No such device" 01:25:26.799 } 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:25:26.799 [2024-12-09 05:20:18.304587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.799 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:25:27.057 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.057 05:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:25:27.057 { 01:25:27.057 "subsystems": [ 01:25:27.057 { 01:25:27.057 "subsystem": "fsdev", 01:25:27.057 "config": [ 01:25:27.057 { 01:25:27.057 "method": "fsdev_set_opts", 01:25:27.057 "params": { 01:25:27.057 "fsdev_io_pool_size": 65535, 01:25:27.057 "fsdev_io_cache_size": 256 01:25:27.057 } 01:25:27.057 } 01:25:27.057 ] 01:25:27.057 }, 01:25:27.057 { 01:25:27.057 "subsystem": "keyring", 01:25:27.057 "config": [] 01:25:27.057 }, 01:25:27.057 { 01:25:27.057 "subsystem": "iobuf", 01:25:27.057 "config": [ 01:25:27.057 { 01:25:27.057 "method": "iobuf_set_options", 01:25:27.057 "params": { 01:25:27.057 "small_pool_count": 8192, 01:25:27.057 "large_pool_count": 1024, 01:25:27.057 "small_bufsize": 8192, 01:25:27.057 "large_bufsize": 135168, 01:25:27.057 "enable_numa": false 01:25:27.057 } 01:25:27.057 } 01:25:27.057 ] 01:25:27.057 }, 01:25:27.057 { 01:25:27.057 "subsystem": "sock", 01:25:27.057 "config": [ 01:25:27.057 { 
01:25:27.057 "method": "sock_set_default_impl", 01:25:27.057 "params": { 01:25:27.057 "impl_name": "posix" 01:25:27.057 } 01:25:27.057 }, 01:25:27.057 { 01:25:27.057 "method": "sock_impl_set_options", 01:25:27.057 "params": { 01:25:27.058 "impl_name": "ssl", 01:25:27.058 "recv_buf_size": 4096, 01:25:27.058 "send_buf_size": 4096, 01:25:27.058 "enable_recv_pipe": true, 01:25:27.058 "enable_quickack": false, 01:25:27.058 "enable_placement_id": 0, 01:25:27.058 "enable_zerocopy_send_server": true, 01:25:27.058 "enable_zerocopy_send_client": false, 01:25:27.058 "zerocopy_threshold": 0, 01:25:27.058 "tls_version": 0, 01:25:27.058 "enable_ktls": false 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "sock_impl_set_options", 01:25:27.058 "params": { 01:25:27.058 "impl_name": "posix", 01:25:27.058 "recv_buf_size": 2097152, 01:25:27.058 "send_buf_size": 2097152, 01:25:27.058 "enable_recv_pipe": true, 01:25:27.058 "enable_quickack": false, 01:25:27.058 "enable_placement_id": 0, 01:25:27.058 "enable_zerocopy_send_server": true, 01:25:27.058 "enable_zerocopy_send_client": false, 01:25:27.058 "zerocopy_threshold": 0, 01:25:27.058 "tls_version": 0, 01:25:27.058 "enable_ktls": false 01:25:27.058 } 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "vmd", 01:25:27.058 "config": [] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "accel", 01:25:27.058 "config": [ 01:25:27.058 { 01:25:27.058 "method": "accel_set_options", 01:25:27.058 "params": { 01:25:27.058 "small_cache_size": 128, 01:25:27.058 "large_cache_size": 16, 01:25:27.058 "task_count": 2048, 01:25:27.058 "sequence_count": 2048, 01:25:27.058 "buf_count": 2048 01:25:27.058 } 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "bdev", 01:25:27.058 "config": [ 01:25:27.058 { 01:25:27.058 "method": "bdev_set_options", 01:25:27.058 "params": { 01:25:27.058 "bdev_io_pool_size": 65535, 01:25:27.058 "bdev_io_cache_size": 256, 01:25:27.058 "bdev_auto_examine": true, 01:25:27.058 "iobuf_small_cache_size": 128, 01:25:27.058 "iobuf_large_cache_size": 16 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "bdev_raid_set_options", 01:25:27.058 "params": { 01:25:27.058 "process_window_size_kb": 1024, 01:25:27.058 "process_max_bandwidth_mb_sec": 0 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "bdev_iscsi_set_options", 01:25:27.058 "params": { 01:25:27.058 "timeout_sec": 30 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "bdev_nvme_set_options", 01:25:27.058 "params": { 01:25:27.058 "action_on_timeout": "none", 01:25:27.058 "timeout_us": 0, 01:25:27.058 "timeout_admin_us": 0, 01:25:27.058 "keep_alive_timeout_ms": 10000, 01:25:27.058 "arbitration_burst": 0, 01:25:27.058 "low_priority_weight": 0, 01:25:27.058 "medium_priority_weight": 0, 01:25:27.058 "high_priority_weight": 0, 01:25:27.058 "nvme_adminq_poll_period_us": 10000, 01:25:27.058 "nvme_ioq_poll_period_us": 0, 01:25:27.058 "io_queue_requests": 0, 01:25:27.058 "delay_cmd_submit": true, 01:25:27.058 "transport_retry_count": 4, 01:25:27.058 "bdev_retry_count": 3, 01:25:27.058 "transport_ack_timeout": 0, 01:25:27.058 "ctrlr_loss_timeout_sec": 0, 01:25:27.058 "reconnect_delay_sec": 0, 01:25:27.058 "fast_io_fail_timeout_sec": 0, 01:25:27.058 "disable_auto_failback": false, 01:25:27.058 "generate_uuids": false, 01:25:27.058 "transport_tos": 0, 01:25:27.058 "nvme_error_stat": false, 01:25:27.058 "rdma_srq_size": 0, 01:25:27.058 "io_path_stat": false, 
01:25:27.058 "allow_accel_sequence": false, 01:25:27.058 "rdma_max_cq_size": 0, 01:25:27.058 "rdma_cm_event_timeout_ms": 0, 01:25:27.058 "dhchap_digests": [ 01:25:27.058 "sha256", 01:25:27.058 "sha384", 01:25:27.058 "sha512" 01:25:27.058 ], 01:25:27.058 "dhchap_dhgroups": [ 01:25:27.058 "null", 01:25:27.058 "ffdhe2048", 01:25:27.058 "ffdhe3072", 01:25:27.058 "ffdhe4096", 01:25:27.058 "ffdhe6144", 01:25:27.058 "ffdhe8192" 01:25:27.058 ] 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "bdev_nvme_set_hotplug", 01:25:27.058 "params": { 01:25:27.058 "period_us": 100000, 01:25:27.058 "enable": false 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "bdev_wait_for_examine" 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "scsi", 01:25:27.058 "config": null 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "scheduler", 01:25:27.058 "config": [ 01:25:27.058 { 01:25:27.058 "method": "framework_set_scheduler", 01:25:27.058 "params": { 01:25:27.058 "name": "static" 01:25:27.058 } 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "vhost_scsi", 01:25:27.058 "config": [] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "vhost_blk", 01:25:27.058 "config": [] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "ublk", 01:25:27.058 "config": [] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "nbd", 01:25:27.058 "config": [] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "nvmf", 01:25:27.058 "config": [ 01:25:27.058 { 01:25:27.058 "method": "nvmf_set_config", 01:25:27.058 "params": { 01:25:27.058 "discovery_filter": "match_any", 01:25:27.058 "admin_cmd_passthru": { 01:25:27.058 "identify_ctrlr": false 01:25:27.058 }, 01:25:27.058 "dhchap_digests": [ 01:25:27.058 "sha256", 01:25:27.058 "sha384", 01:25:27.058 "sha512" 01:25:27.058 ], 01:25:27.058 "dhchap_dhgroups": [ 01:25:27.058 "null", 01:25:27.058 "ffdhe2048", 01:25:27.058 "ffdhe3072", 01:25:27.058 "ffdhe4096", 01:25:27.058 "ffdhe6144", 01:25:27.058 "ffdhe8192" 01:25:27.058 ] 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "nvmf_set_max_subsystems", 01:25:27.058 "params": { 01:25:27.058 "max_subsystems": 1024 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "nvmf_set_crdt", 01:25:27.058 "params": { 01:25:27.058 "crdt1": 0, 01:25:27.058 "crdt2": 0, 01:25:27.058 "crdt3": 0 01:25:27.058 } 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "method": "nvmf_create_transport", 01:25:27.058 "params": { 01:25:27.058 "trtype": "TCP", 01:25:27.058 "max_queue_depth": 128, 01:25:27.058 "max_io_qpairs_per_ctrlr": 127, 01:25:27.058 "in_capsule_data_size": 4096, 01:25:27.058 "max_io_size": 131072, 01:25:27.058 "io_unit_size": 131072, 01:25:27.058 "max_aq_depth": 128, 01:25:27.058 "num_shared_buffers": 511, 01:25:27.058 "buf_cache_size": 4294967295, 01:25:27.058 "dif_insert_or_strip": false, 01:25:27.058 "zcopy": false, 01:25:27.058 "c2h_success": true, 01:25:27.058 "sock_priority": 0, 01:25:27.058 "abort_timeout_sec": 1, 01:25:27.058 "ack_timeout": 0, 01:25:27.058 "data_wr_pool_size": 0 01:25:27.058 } 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 }, 01:25:27.058 { 01:25:27.058 "subsystem": "iscsi", 01:25:27.058 "config": [ 01:25:27.058 { 01:25:27.058 "method": "iscsi_set_options", 01:25:27.058 "params": { 01:25:27.058 "node_base": "iqn.2016-06.io.spdk", 01:25:27.058 "max_sessions": 128, 01:25:27.058 "max_connections_per_session": 2, 01:25:27.058 "max_queue_depth": 64, 01:25:27.058 
"default_time2wait": 2, 01:25:27.058 "default_time2retain": 20, 01:25:27.058 "first_burst_length": 8192, 01:25:27.058 "immediate_data": true, 01:25:27.058 "allow_duplicated_isid": false, 01:25:27.058 "error_recovery_level": 0, 01:25:27.058 "nop_timeout": 60, 01:25:27.058 "nop_in_interval": 30, 01:25:27.058 "disable_chap": false, 01:25:27.058 "require_chap": false, 01:25:27.058 "mutual_chap": false, 01:25:27.058 "chap_group": 0, 01:25:27.058 "max_large_datain_per_connection": 64, 01:25:27.058 "max_r2t_per_connection": 4, 01:25:27.058 "pdu_pool_size": 36864, 01:25:27.058 "immediate_data_pool_size": 16384, 01:25:27.058 "data_out_pool_size": 2048 01:25:27.058 } 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 } 01:25:27.058 ] 01:25:27.058 } 01:25:27.058 05:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:25:27.058 05:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58277 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58277 ']' 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58277 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58277 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:27.059 killing process with pid 58277 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58277' 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58277 01:25:27.059 05:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58277 01:25:29.590 05:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58333 01:25:29.590 05:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:25:29.590 05:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58333 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58333 ']' 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58333 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58333 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:34.854 killing process with pid 58333 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58333' 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58333 01:25:34.854 05:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58333 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:25:37.386 01:25:37.386 real 0m11.740s 01:25:37.386 user 0m10.967s 01:25:37.386 sys 0m1.263s 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:25:37.386 ************************************ 01:25:37.386 END TEST skip_rpc_with_json 01:25:37.386 ************************************ 01:25:37.386 05:20:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:25:37.386 05:20:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:37.386 05:20:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:37.386 05:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:37.386 ************************************ 01:25:37.386 START TEST skip_rpc_with_delay 01:25:37.386 ************************************ 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:25:37.386 [2024-12-09 05:20:28.778615] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
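The *ERROR* line above is the expected outcome of skip_rpc_with_delay, not a regression: --wait-for-rpc defers framework initialization until an RPC tells the app to proceed, and --no-rpc-server removes the only channel that RPC could arrive on, so the app rejects the combination up front. For contrast, a sketch of the legitimate --wait-for-rpc flow (paths assumed, run from an SPDK checkout):

  build/bin/spdk_tgt --wait-for-rpc -m 0x1 &    # RPC server up, subsystem init deferred
  ./scripts/rpc.py framework_start_init         # kick off the deferred initialization
  ./scripts/rpc.py framework_wait_init          # block until subsystems are ready
  ./scripts/rpc.py spdk_get_version             # full RPC set now available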
01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:37.386 01:25:37.386 real 0m0.202s 01:25:37.386 user 0m0.106s 01:25:37.386 sys 0m0.092s 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:37.386 05:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:25:37.386 ************************************ 01:25:37.386 END TEST skip_rpc_with_delay 01:25:37.386 ************************************ 01:25:37.386 05:20:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:25:37.386 05:20:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:25:37.386 05:20:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:25:37.386 05:20:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:37.386 05:20:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:37.386 05:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:37.386 ************************************ 01:25:37.386 START TEST exit_on_failed_rpc_init 01:25:37.386 ************************************ 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58471 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58471 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58471 ']' 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:37.386 05:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:25:37.645 [2024-12-09 05:20:29.036220] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:37.645 [2024-12-09 05:20:29.036432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58471 ] 01:25:37.645 [2024-12-09 05:20:29.219855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:37.903 [2024-12-09 05:20:29.365804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:25:38.837 05:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:25:39.094 [2024-12-09 05:20:30.496687] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:39.094 [2024-12-09 05:20:30.496898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58490 ] 01:25:39.094 [2024-12-09 05:20:30.680658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:39.351 [2024-12-09 05:20:30.809674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:39.351 [2024-12-09 05:20:30.809823] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
01:25:39.351 [2024-12-09 05:20:30.809845] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:25:39.351 [2024-12-09 05:20:30.809865] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58471 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58471 ']' 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58471 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58471 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:39.609 killing process with pid 58471 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58471' 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58471 01:25:39.609 05:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58471 01:25:42.132 01:25:42.132 real 0m4.797s 01:25:42.132 user 0m5.181s 01:25:42.132 sys 0m0.794s 01:25:42.132 05:20:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:42.132 05:20:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:25:42.132 ************************************ 01:25:42.132 END TEST exit_on_failed_rpc_init 01:25:42.132 ************************************ 01:25:42.132 05:20:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:25:42.132 01:25:42.132 real 0m24.801s 01:25:42.132 user 0m23.380s 01:25:42.132 sys 0m2.961s 01:25:42.389 05:20:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:42.389 05:20:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:42.389 ************************************ 01:25:42.389 END TEST skip_rpc 01:25:42.389 ************************************ 01:25:42.389 05:20:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:25:42.389 05:20:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:42.389 05:20:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:42.390 05:20:33 -- common/autotest_common.sh@10 -- # set +x 01:25:42.390 
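Recap of the exit_on_failed_rpc_init result just above: the second spdk_tgt (-m 0x2) tried to bind the /var/tmp/spdk.sock already held by the first instance, rpc.c reported the path in use, and spdk_app_stop exited non-zero, which is precisely the behavior under test. Outside the test, two targets coexist by giving the second its own RPC socket, roughly as below; the spdk2.sock name is an assumption for illustration.

  build/bin/spdk_tgt -m 0x1 &                               # first instance, default socket
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &        # second instance, private socket
  ./scripts/rpc.py spdk_get_version                         # talks to /var/tmp/spdk.sock
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version  # address the second instance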
************************************ 01:25:42.390 START TEST rpc_client 01:25:42.390 ************************************ 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:25:42.390 * Looking for test storage... 01:25:42.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@345 -- # : 1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@353 -- # local d=1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@355 -- # echo 1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@353 -- # local d=2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@355 -- # echo 2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:42.390 05:20:33 rpc_client -- scripts/common.sh@368 -- # return 0 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.390 --rc genhtml_branch_coverage=1 01:25:42.390 --rc genhtml_function_coverage=1 01:25:42.390 --rc genhtml_legend=1 01:25:42.390 --rc geninfo_all_blocks=1 01:25:42.390 --rc geninfo_unexecuted_blocks=1 01:25:42.390 01:25:42.390 ' 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.390 --rc genhtml_branch_coverage=1 01:25:42.390 --rc genhtml_function_coverage=1 01:25:42.390 --rc genhtml_legend=1 01:25:42.390 --rc geninfo_all_blocks=1 01:25:42.390 --rc geninfo_unexecuted_blocks=1 01:25:42.390 01:25:42.390 ' 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.390 --rc genhtml_branch_coverage=1 01:25:42.390 --rc genhtml_function_coverage=1 01:25:42.390 --rc genhtml_legend=1 01:25:42.390 --rc geninfo_all_blocks=1 01:25:42.390 --rc geninfo_unexecuted_blocks=1 01:25:42.390 01:25:42.390 ' 01:25:42.390 05:20:33 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.390 --rc genhtml_branch_coverage=1 01:25:42.390 --rc genhtml_function_coverage=1 01:25:42.390 --rc genhtml_legend=1 01:25:42.390 --rc geninfo_all_blocks=1 01:25:42.390 --rc geninfo_unexecuted_blocks=1 01:25:42.390 01:25:42.390 ' 01:25:42.390 05:20:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:25:42.650 OK 01:25:42.650 05:20:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:25:42.651 01:25:42.651 real 0m0.249s 01:25:42.651 user 0m0.151s 01:25:42.651 sys 0m0.107s 01:25:42.651 05:20:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:42.651 05:20:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:25:42.651 ************************************ 01:25:42.651 END TEST rpc_client 01:25:42.651 ************************************ 01:25:42.651 05:20:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:25:42.651 05:20:34 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:42.651 05:20:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:42.651 05:20:34 -- common/autotest_common.sh@10 -- # set +x 01:25:42.651 ************************************ 01:25:42.651 START TEST json_config 01:25:42.651 ************************************ 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1693 -- # lcov --version 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:42.651 05:20:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:42.651 05:20:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 01:25:42.651 05:20:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 01:25:42.651 05:20:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 01:25:42.651 05:20:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:42.651 05:20:34 json_config -- scripts/common.sh@344 -- # case "$op" in 01:25:42.651 05:20:34 json_config -- scripts/common.sh@345 -- # : 1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:42.651 05:20:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:42.651 05:20:34 json_config -- scripts/common.sh@365 -- # decimal 1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@353 -- # local d=1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:42.651 05:20:34 json_config -- scripts/common.sh@355 -- # echo 1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 01:25:42.651 05:20:34 json_config -- scripts/common.sh@366 -- # decimal 2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@353 -- # local d=2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:42.651 05:20:34 json_config -- scripts/common.sh@355 -- # echo 2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 01:25:42.651 05:20:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:42.651 05:20:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:42.651 05:20:34 json_config -- scripts/common.sh@368 -- # return 0 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:42.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.651 --rc genhtml_branch_coverage=1 01:25:42.651 --rc genhtml_function_coverage=1 01:25:42.651 --rc genhtml_legend=1 01:25:42.651 --rc geninfo_all_blocks=1 01:25:42.651 --rc geninfo_unexecuted_blocks=1 01:25:42.651 01:25:42.651 ' 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:42.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.651 --rc genhtml_branch_coverage=1 01:25:42.651 --rc genhtml_function_coverage=1 01:25:42.651 --rc genhtml_legend=1 01:25:42.651 --rc geninfo_all_blocks=1 01:25:42.651 --rc geninfo_unexecuted_blocks=1 01:25:42.651 01:25:42.651 ' 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:42.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.651 --rc genhtml_branch_coverage=1 01:25:42.651 --rc genhtml_function_coverage=1 01:25:42.651 --rc genhtml_legend=1 01:25:42.651 --rc geninfo_all_blocks=1 01:25:42.651 --rc geninfo_unexecuted_blocks=1 01:25:42.651 01:25:42.651 ' 01:25:42.651 05:20:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:42.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.651 --rc genhtml_branch_coverage=1 01:25:42.651 --rc genhtml_function_coverage=1 01:25:42.651 --rc genhtml_legend=1 01:25:42.651 --rc geninfo_all_blocks=1 01:25:42.651 --rc geninfo_unexecuted_blocks=1 01:25:42.651 01:25:42.651 ' 01:25:42.651 05:20:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@7 -- # uname -s 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:42.651 05:20:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:42.651 05:20:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fab57822-ba49-4e71-bebd-8b94bbcfdc8e 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=fab57822-ba49-4e71-bebd-8b94bbcfdc8e 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:42.935 05:20:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 01:25:42.935 05:20:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:42.935 05:20:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:42.935 05:20:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:42.935 05:20:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.935 05:20:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.935 05:20:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.935 05:20:34 json_config -- paths/export.sh@5 -- # export PATH 01:25:42.935 05:20:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@51 -- # : 0 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:25:42.935 05:20:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:25:42.935 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:25:42.935 05:20:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 01:25:42.935 05:20:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:25:42.935 05:20:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:25:42.935 05:20:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:25:42.935 05:20:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:25:42.936 05:20:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:25:42.936 05:20:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 01:25:42.936 WARNING: No tests are enabled so not running JSON configuration tests 01:25:42.936 05:20:34 json_config -- json_config/json_config.sh@28 -- # exit 0 01:25:42.936 ************************************ 01:25:42.936 END TEST json_config 01:25:42.936 ************************************ 01:25:42.936 01:25:42.936 real 0m0.188s 01:25:42.936 user 0m0.114s 01:25:42.936 sys 0m0.072s 01:25:42.936 05:20:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:42.936 05:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 01:25:42.936 05:20:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:25:42.936 05:20:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:42.936 05:20:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:42.936 05:20:34 -- common/autotest_common.sh@10 -- # set +x 01:25:42.936 ************************************ 01:25:42.936 START TEST json_config_extra_key 01:25:42.936 ************************************ 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 01:25:42.936 05:20:34 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:42.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.936 --rc genhtml_branch_coverage=1 01:25:42.936 --rc genhtml_function_coverage=1 01:25:42.936 --rc genhtml_legend=1 01:25:42.936 --rc geninfo_all_blocks=1 01:25:42.936 --rc geninfo_unexecuted_blocks=1 01:25:42.936 01:25:42.936 ' 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:42.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.936 --rc genhtml_branch_coverage=1 01:25:42.936 --rc genhtml_function_coverage=1 01:25:42.936 --rc genhtml_legend=1 01:25:42.936 --rc geninfo_all_blocks=1 01:25:42.936 --rc geninfo_unexecuted_blocks=1 01:25:42.936 01:25:42.936 ' 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:42.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.936 --rc genhtml_branch_coverage=1 01:25:42.936 --rc genhtml_function_coverage=1 01:25:42.936 --rc genhtml_legend=1 01:25:42.936 --rc geninfo_all_blocks=1 01:25:42.936 --rc geninfo_unexecuted_blocks=1 01:25:42.936 01:25:42.936 ' 01:25:42.936 05:20:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:42.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:42.936 --rc genhtml_branch_coverage=1 01:25:42.936 --rc 
genhtml_function_coverage=1 01:25:42.936 --rc genhtml_legend=1 01:25:42.936 --rc geninfo_all_blocks=1 01:25:42.936 --rc geninfo_unexecuted_blocks=1 01:25:42.936 01:25:42.936 ' 01:25:42.936 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fab57822-ba49-4e71-bebd-8b94bbcfdc8e 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fab57822-ba49-4e71-bebd-8b94bbcfdc8e 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:42.936 05:20:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:42.936 05:20:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:42.936 05:20:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.936 05:20:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.937 05:20:34 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.937 05:20:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:25:42.937 05:20:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:25:42.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:25:42.937 05:20:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:25:42.937 INFO: launching applications... 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
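The two "[: : integer expression expected" complaints above come from line 33 of test/nvmf/common.sh: the variable under test expands to an empty string, so the shell ends up evaluating '[' '' -eq 1 ']', and test(1) cannot do an integer comparison on an empty operand. The failure is harmless here (the comparison is simply treated as false and the trace moves on to @37), but the conventional bash guard is to default the value before comparing. A minimal sketch, using a hypothetical flag name FLAG that does not appear in this log:

    # default an unset or empty flag to 0 before an integer test (FLAG is illustrative)
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi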
01:25:42.937 05:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58700 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:25:42.937 Waiting for target to run... 01:25:42.937 05:20:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58700 /var/tmp/spdk_tgt.sock 01:25:42.937 05:20:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58700 ']' 01:25:42.937 05:20:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:25:42.937 05:20:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:42.937 05:20:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:25:42.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:25:42.937 05:20:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:42.937 05:20:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:25:43.194 [2024-12-09 05:20:34.660285] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:43.194 [2024-12-09 05:20:34.660478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58700 ] 01:25:43.758 [2024-12-09 05:20:35.159633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:43.758 [2024-12-09 05:20:35.320534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:44.691 01:25:44.691 INFO: shutting down applications... 01:25:44.691 05:20:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:44.691 05:20:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:25:44.691 05:20:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
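The shutdown sequence traced below is the json_config/common.sh stop pattern: send SIGINT to the target pid, then poll it with kill -0 every 0.5 s for at most 30 iterations before declaring failure. A condensed sketch of that loop (the pid handling and the 30-iteration budget are exactly what the trace shows; the surrounding error handling is omitted):

    kill -SIGINT "$pid"                        # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests liveness
        sleep 0.5
    done
    echo 'SPDK target shutdown done'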
01:25:44.691 05:20:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58700 ]] 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58700 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:44.691 05:20:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:25:44.949 05:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:25:44.949 05:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:44.949 05:20:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:44.949 05:20:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:25:45.517 05:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:25:45.517 05:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:45.517 05:20:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:45.517 05:20:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:25:46.082 05:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:25:46.082 05:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:46.082 05:20:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:46.082 05:20:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:25:46.648 05:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:25:46.648 05:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:46.648 05:20:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:46.648 05:20:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:25:47.214 05:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:25:47.214 05:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:47.214 05:20:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:47.214 05:20:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:25:47.471 05:20:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:25:47.472 05:20:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:25:47.472 05:20:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 01:25:47.472 05:20:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:25:47.472 05:20:39 json_config_extra_key -- json_config/common.sh@43 -- # break 01:25:47.472 05:20:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:25:47.472 SPDK target shutdown done 01:25:47.472 Success 01:25:47.472 05:20:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:25:47.472 05:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:25:47.472 01:25:47.472 real 0m4.740s 01:25:47.472 user 0m4.300s 01:25:47.472 sys 0m0.708s 01:25:47.472 
************************************ 01:25:47.472 END TEST json_config_extra_key 01:25:47.472 ************************************ 01:25:47.472 05:20:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:47.472 05:20:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:25:47.730 05:20:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:25:47.730 05:20:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:47.730 05:20:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:47.730 05:20:39 -- common/autotest_common.sh@10 -- # set +x 01:25:47.730 ************************************ 01:25:47.730 START TEST alias_rpc 01:25:47.730 ************************************ 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:25:47.730 * Looking for test storage... 01:25:47.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@345 -- # : 1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:47.730 05:20:39 alias_rpc -- scripts/common.sh@368 -- # return 0 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:47.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:47.730 --rc genhtml_branch_coverage=1 01:25:47.730 --rc genhtml_function_coverage=1 01:25:47.730 --rc genhtml_legend=1 01:25:47.730 --rc geninfo_all_blocks=1 01:25:47.730 --rc geninfo_unexecuted_blocks=1 01:25:47.730 01:25:47.730 ' 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:47.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:47.730 --rc genhtml_branch_coverage=1 01:25:47.730 --rc genhtml_function_coverage=1 01:25:47.730 --rc genhtml_legend=1 01:25:47.730 --rc geninfo_all_blocks=1 01:25:47.730 --rc geninfo_unexecuted_blocks=1 01:25:47.730 01:25:47.730 ' 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:47.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:47.730 --rc genhtml_branch_coverage=1 01:25:47.730 --rc genhtml_function_coverage=1 01:25:47.730 --rc genhtml_legend=1 01:25:47.730 --rc geninfo_all_blocks=1 01:25:47.730 --rc geninfo_unexecuted_blocks=1 01:25:47.730 01:25:47.730 ' 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:47.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:47.730 --rc genhtml_branch_coverage=1 01:25:47.730 --rc genhtml_function_coverage=1 01:25:47.730 --rc genhtml_legend=1 01:25:47.730 --rc geninfo_all_blocks=1 01:25:47.730 --rc geninfo_unexecuted_blocks=1 01:25:47.730 01:25:47.730 ' 01:25:47.730 05:20:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:25:47.730 05:20:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58812 01:25:47.730 05:20:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58812 01:25:47.730 05:20:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58812 ']' 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 01:25:47.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:47.730 05:20:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:47.989 [2024-12-09 05:20:39.470331] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:47.989 [2024-12-09 05:20:39.471117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58812 ] 01:25:48.246 [2024-12-09 05:20:39.659402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:48.246 [2024-12-09 05:20:39.796915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:49.179 05:20:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:49.179 05:20:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 01:25:49.179 05:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:25:49.746 05:20:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58812 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58812 ']' 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58812 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58812 01:25:49.746 killing process with pid 58812 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58812' 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 58812 01:25:49.746 05:20:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 58812 01:25:52.318 ************************************ 01:25:52.318 END TEST alias_rpc 01:25:52.318 ************************************ 01:25:52.318 01:25:52.318 real 0m4.271s 01:25:52.318 user 0m4.483s 01:25:52.318 sys 0m0.674s 01:25:52.318 05:20:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:52.318 05:20:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:25:52.318 05:20:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 01:25:52.318 05:20:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:25:52.318 05:20:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:52.318 05:20:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:52.318 05:20:43 -- common/autotest_common.sh@10 -- # set +x 01:25:52.318 ************************************ 01:25:52.318 START TEST spdkcli_tcp 01:25:52.318 ************************************ 01:25:52.318 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:25:52.318 * Looking for test storage... 
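killprocess, traced above for pid 58812, is the autotest_common.sh teardown helper: confirm the pid is alive, refuse to kill anything whose command name is "sudo", then kill it and reap it with wait. A condensed sketch (the real helper also carries a non-Linux branch that the "Linux = Linux" check skips in this run):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                             # fail fast if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK target
        [ "$name" = sudo ] && return 1             # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap; the target is our child
    }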
01:25:52.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:25:52.318 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:52.318 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:25:52.318 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:52.318 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:52.318 05:20:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 01:25:52.319 05:20:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:25:52.319 05:20:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:52.319 05:20:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:52.319 05:20:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:52.319 --rc genhtml_branch_coverage=1 01:25:52.319 --rc genhtml_function_coverage=1 01:25:52.319 --rc genhtml_legend=1 01:25:52.319 --rc geninfo_all_blocks=1 01:25:52.319 --rc geninfo_unexecuted_blocks=1 01:25:52.319 01:25:52.319 ' 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:52.319 --rc genhtml_branch_coverage=1 01:25:52.319 --rc genhtml_function_coverage=1 01:25:52.319 --rc genhtml_legend=1 01:25:52.319 --rc geninfo_all_blocks=1 01:25:52.319 --rc geninfo_unexecuted_blocks=1 01:25:52.319 
01:25:52.319 ' 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:52.319 --rc genhtml_branch_coverage=1 01:25:52.319 --rc genhtml_function_coverage=1 01:25:52.319 --rc genhtml_legend=1 01:25:52.319 --rc geninfo_all_blocks=1 01:25:52.319 --rc geninfo_unexecuted_blocks=1 01:25:52.319 01:25:52.319 ' 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:52.319 --rc genhtml_branch_coverage=1 01:25:52.319 --rc genhtml_function_coverage=1 01:25:52.319 --rc genhtml_legend=1 01:25:52.319 --rc geninfo_all_blocks=1 01:25:52.319 --rc geninfo_unexecuted_blocks=1 01:25:52.319 01:25:52.319 ' 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58924 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 01:25:52.319 05:20:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58924 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58924 ']' 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:52.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:52.319 05:20:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:25:52.319 [2024-12-09 05:20:43.793524] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
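The spdkcli_tcp group starting here exercises the RPC server over TCP rather than the usual UNIX socket: the trace below backgrounds a socat bridge from TCP port 9998 to /var/tmp/spdk.sock and then drives rpc_get_methods through it. The same flow by hand (the final kill is an assumption; in the test itself the err_cleanup trap set at tcp.sh@21 tears the bridge down):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP<->UNIX bridge
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"                                         # assumed cleanup step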
01:25:52.319 [2024-12-09 05:20:43.794060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58924 ] 01:25:52.577 [2024-12-09 05:20:43.966928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:25:52.577 [2024-12-09 05:20:44.101829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:52.577 [2024-12-09 05:20:44.101829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:53.512 05:20:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:53.512 05:20:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 01:25:53.512 05:20:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58941 01:25:53.512 05:20:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 01:25:53.512 05:20:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 01:25:53.771 [ 01:25:53.771 "bdev_malloc_delete", 01:25:53.771 "bdev_malloc_create", 01:25:53.771 "bdev_null_resize", 01:25:53.771 "bdev_null_delete", 01:25:53.771 "bdev_null_create", 01:25:53.771 "bdev_nvme_cuse_unregister", 01:25:53.771 "bdev_nvme_cuse_register", 01:25:53.771 "bdev_opal_new_user", 01:25:53.771 "bdev_opal_set_lock_state", 01:25:53.771 "bdev_opal_delete", 01:25:53.771 "bdev_opal_get_info", 01:25:53.771 "bdev_opal_create", 01:25:53.771 "bdev_nvme_opal_revert", 01:25:53.771 "bdev_nvme_opal_init", 01:25:53.771 "bdev_nvme_send_cmd", 01:25:53.771 "bdev_nvme_set_keys", 01:25:53.771 "bdev_nvme_get_path_iostat", 01:25:53.771 "bdev_nvme_get_mdns_discovery_info", 01:25:53.771 "bdev_nvme_stop_mdns_discovery", 01:25:53.771 "bdev_nvme_start_mdns_discovery", 01:25:53.771 "bdev_nvme_set_multipath_policy", 01:25:53.771 "bdev_nvme_set_preferred_path", 01:25:53.771 "bdev_nvme_get_io_paths", 01:25:53.771 "bdev_nvme_remove_error_injection", 01:25:53.771 "bdev_nvme_add_error_injection", 01:25:53.771 "bdev_nvme_get_discovery_info", 01:25:53.771 "bdev_nvme_stop_discovery", 01:25:53.771 "bdev_nvme_start_discovery", 01:25:53.771 "bdev_nvme_get_controller_health_info", 01:25:53.771 "bdev_nvme_disable_controller", 01:25:53.771 "bdev_nvme_enable_controller", 01:25:53.771 "bdev_nvme_reset_controller", 01:25:53.771 "bdev_nvme_get_transport_statistics", 01:25:53.771 "bdev_nvme_apply_firmware", 01:25:53.771 "bdev_nvme_detach_controller", 01:25:53.771 "bdev_nvme_get_controllers", 01:25:53.771 "bdev_nvme_attach_controller", 01:25:53.771 "bdev_nvme_set_hotplug", 01:25:53.771 "bdev_nvme_set_options", 01:25:53.771 "bdev_passthru_delete", 01:25:53.771 "bdev_passthru_create", 01:25:53.771 "bdev_lvol_set_parent_bdev", 01:25:53.771 "bdev_lvol_set_parent", 01:25:53.771 "bdev_lvol_check_shallow_copy", 01:25:53.771 "bdev_lvol_start_shallow_copy", 01:25:53.771 "bdev_lvol_grow_lvstore", 01:25:53.771 "bdev_lvol_get_lvols", 01:25:53.771 "bdev_lvol_get_lvstores", 01:25:53.771 "bdev_lvol_delete", 01:25:53.771 "bdev_lvol_set_read_only", 01:25:53.771 "bdev_lvol_resize", 01:25:53.771 "bdev_lvol_decouple_parent", 01:25:53.771 "bdev_lvol_inflate", 01:25:53.771 "bdev_lvol_rename", 01:25:53.771 "bdev_lvol_clone_bdev", 01:25:53.771 "bdev_lvol_clone", 01:25:53.771 "bdev_lvol_snapshot", 01:25:53.771 "bdev_lvol_create", 01:25:53.771 "bdev_lvol_delete_lvstore", 01:25:53.771 "bdev_lvol_rename_lvstore", 01:25:53.771 
"bdev_lvol_create_lvstore", 01:25:53.771 "bdev_raid_set_options", 01:25:53.771 "bdev_raid_remove_base_bdev", 01:25:53.771 "bdev_raid_add_base_bdev", 01:25:53.771 "bdev_raid_delete", 01:25:53.771 "bdev_raid_create", 01:25:53.771 "bdev_raid_get_bdevs", 01:25:53.771 "bdev_error_inject_error", 01:25:53.771 "bdev_error_delete", 01:25:53.771 "bdev_error_create", 01:25:53.771 "bdev_split_delete", 01:25:53.771 "bdev_split_create", 01:25:53.771 "bdev_delay_delete", 01:25:53.771 "bdev_delay_create", 01:25:53.771 "bdev_delay_update_latency", 01:25:53.771 "bdev_zone_block_delete", 01:25:53.771 "bdev_zone_block_create", 01:25:53.771 "blobfs_create", 01:25:53.771 "blobfs_detect", 01:25:53.771 "blobfs_set_cache_size", 01:25:53.771 "bdev_xnvme_delete", 01:25:53.771 "bdev_xnvme_create", 01:25:53.771 "bdev_aio_delete", 01:25:53.771 "bdev_aio_rescan", 01:25:53.771 "bdev_aio_create", 01:25:53.771 "bdev_ftl_set_property", 01:25:53.771 "bdev_ftl_get_properties", 01:25:53.771 "bdev_ftl_get_stats", 01:25:53.771 "bdev_ftl_unmap", 01:25:53.771 "bdev_ftl_unload", 01:25:53.771 "bdev_ftl_delete", 01:25:53.771 "bdev_ftl_load", 01:25:53.771 "bdev_ftl_create", 01:25:53.771 "bdev_virtio_attach_controller", 01:25:53.771 "bdev_virtio_scsi_get_devices", 01:25:53.771 "bdev_virtio_detach_controller", 01:25:53.771 "bdev_virtio_blk_set_hotplug", 01:25:53.771 "bdev_iscsi_delete", 01:25:53.771 "bdev_iscsi_create", 01:25:53.771 "bdev_iscsi_set_options", 01:25:53.771 "accel_error_inject_error", 01:25:53.771 "ioat_scan_accel_module", 01:25:53.771 "dsa_scan_accel_module", 01:25:53.771 "iaa_scan_accel_module", 01:25:53.771 "keyring_file_remove_key", 01:25:53.771 "keyring_file_add_key", 01:25:53.771 "keyring_linux_set_options", 01:25:53.771 "fsdev_aio_delete", 01:25:53.771 "fsdev_aio_create", 01:25:53.771 "iscsi_get_histogram", 01:25:53.771 "iscsi_enable_histogram", 01:25:53.771 "iscsi_set_options", 01:25:53.771 "iscsi_get_auth_groups", 01:25:53.771 "iscsi_auth_group_remove_secret", 01:25:53.771 "iscsi_auth_group_add_secret", 01:25:53.771 "iscsi_delete_auth_group", 01:25:53.771 "iscsi_create_auth_group", 01:25:53.771 "iscsi_set_discovery_auth", 01:25:53.771 "iscsi_get_options", 01:25:53.771 "iscsi_target_node_request_logout", 01:25:53.771 "iscsi_target_node_set_redirect", 01:25:53.771 "iscsi_target_node_set_auth", 01:25:53.771 "iscsi_target_node_add_lun", 01:25:53.771 "iscsi_get_stats", 01:25:53.771 "iscsi_get_connections", 01:25:53.771 "iscsi_portal_group_set_auth", 01:25:53.771 "iscsi_start_portal_group", 01:25:53.771 "iscsi_delete_portal_group", 01:25:53.771 "iscsi_create_portal_group", 01:25:53.771 "iscsi_get_portal_groups", 01:25:53.771 "iscsi_delete_target_node", 01:25:53.771 "iscsi_target_node_remove_pg_ig_maps", 01:25:53.771 "iscsi_target_node_add_pg_ig_maps", 01:25:53.771 "iscsi_create_target_node", 01:25:53.771 "iscsi_get_target_nodes", 01:25:53.771 "iscsi_delete_initiator_group", 01:25:53.771 "iscsi_initiator_group_remove_initiators", 01:25:53.771 "iscsi_initiator_group_add_initiators", 01:25:53.771 "iscsi_create_initiator_group", 01:25:53.771 "iscsi_get_initiator_groups", 01:25:53.771 "nvmf_set_crdt", 01:25:53.771 "nvmf_set_config", 01:25:53.771 "nvmf_set_max_subsystems", 01:25:53.771 "nvmf_stop_mdns_prr", 01:25:53.771 "nvmf_publish_mdns_prr", 01:25:53.771 "nvmf_subsystem_get_listeners", 01:25:53.771 "nvmf_subsystem_get_qpairs", 01:25:53.771 "nvmf_subsystem_get_controllers", 01:25:53.771 "nvmf_get_stats", 01:25:53.771 "nvmf_get_transports", 01:25:53.771 "nvmf_create_transport", 01:25:53.771 "nvmf_get_targets", 01:25:53.771 
"nvmf_delete_target", 01:25:53.771 "nvmf_create_target", 01:25:53.771 "nvmf_subsystem_allow_any_host", 01:25:53.771 "nvmf_subsystem_set_keys", 01:25:53.771 "nvmf_subsystem_remove_host", 01:25:53.771 "nvmf_subsystem_add_host", 01:25:53.771 "nvmf_ns_remove_host", 01:25:53.771 "nvmf_ns_add_host", 01:25:53.771 "nvmf_subsystem_remove_ns", 01:25:53.771 "nvmf_subsystem_set_ns_ana_group", 01:25:53.771 "nvmf_subsystem_add_ns", 01:25:53.771 "nvmf_subsystem_listener_set_ana_state", 01:25:53.771 "nvmf_discovery_get_referrals", 01:25:53.771 "nvmf_discovery_remove_referral", 01:25:53.771 "nvmf_discovery_add_referral", 01:25:53.771 "nvmf_subsystem_remove_listener", 01:25:53.771 "nvmf_subsystem_add_listener", 01:25:53.771 "nvmf_delete_subsystem", 01:25:53.771 "nvmf_create_subsystem", 01:25:53.771 "nvmf_get_subsystems", 01:25:53.771 "env_dpdk_get_mem_stats", 01:25:53.771 "nbd_get_disks", 01:25:53.771 "nbd_stop_disk", 01:25:53.771 "nbd_start_disk", 01:25:53.771 "ublk_recover_disk", 01:25:53.771 "ublk_get_disks", 01:25:53.771 "ublk_stop_disk", 01:25:53.771 "ublk_start_disk", 01:25:53.771 "ublk_destroy_target", 01:25:53.771 "ublk_create_target", 01:25:53.771 "virtio_blk_create_transport", 01:25:53.771 "virtio_blk_get_transports", 01:25:53.771 "vhost_controller_set_coalescing", 01:25:53.771 "vhost_get_controllers", 01:25:53.771 "vhost_delete_controller", 01:25:53.771 "vhost_create_blk_controller", 01:25:53.771 "vhost_scsi_controller_remove_target", 01:25:53.771 "vhost_scsi_controller_add_target", 01:25:53.771 "vhost_start_scsi_controller", 01:25:53.771 "vhost_create_scsi_controller", 01:25:53.772 "thread_set_cpumask", 01:25:53.772 "scheduler_set_options", 01:25:53.772 "framework_get_governor", 01:25:53.772 "framework_get_scheduler", 01:25:53.772 "framework_set_scheduler", 01:25:53.772 "framework_get_reactors", 01:25:53.772 "thread_get_io_channels", 01:25:53.772 "thread_get_pollers", 01:25:53.772 "thread_get_stats", 01:25:53.772 "framework_monitor_context_switch", 01:25:53.772 "spdk_kill_instance", 01:25:53.772 "log_enable_timestamps", 01:25:53.772 "log_get_flags", 01:25:53.772 "log_clear_flag", 01:25:53.772 "log_set_flag", 01:25:53.772 "log_get_level", 01:25:53.772 "log_set_level", 01:25:53.772 "log_get_print_level", 01:25:53.772 "log_set_print_level", 01:25:53.772 "framework_enable_cpumask_locks", 01:25:53.772 "framework_disable_cpumask_locks", 01:25:53.772 "framework_wait_init", 01:25:53.772 "framework_start_init", 01:25:53.772 "scsi_get_devices", 01:25:53.772 "bdev_get_histogram", 01:25:53.772 "bdev_enable_histogram", 01:25:53.772 "bdev_set_qos_limit", 01:25:53.772 "bdev_set_qd_sampling_period", 01:25:53.772 "bdev_get_bdevs", 01:25:53.772 "bdev_reset_iostat", 01:25:53.772 "bdev_get_iostat", 01:25:53.772 "bdev_examine", 01:25:53.772 "bdev_wait_for_examine", 01:25:53.772 "bdev_set_options", 01:25:53.772 "accel_get_stats", 01:25:53.772 "accel_set_options", 01:25:53.772 "accel_set_driver", 01:25:53.772 "accel_crypto_key_destroy", 01:25:53.772 "accel_crypto_keys_get", 01:25:53.772 "accel_crypto_key_create", 01:25:53.772 "accel_assign_opc", 01:25:53.772 "accel_get_module_info", 01:25:53.772 "accel_get_opc_assignments", 01:25:53.772 "vmd_rescan", 01:25:53.772 "vmd_remove_device", 01:25:53.772 "vmd_enable", 01:25:53.772 "sock_get_default_impl", 01:25:53.772 "sock_set_default_impl", 01:25:53.772 "sock_impl_set_options", 01:25:53.772 "sock_impl_get_options", 01:25:53.772 "iobuf_get_stats", 01:25:53.772 "iobuf_set_options", 01:25:53.772 "keyring_get_keys", 01:25:53.772 "framework_get_pci_devices", 01:25:53.772 
"framework_get_config", 01:25:53.772 "framework_get_subsystems", 01:25:53.772 "fsdev_set_opts", 01:25:53.772 "fsdev_get_opts", 01:25:53.772 "trace_get_info", 01:25:53.772 "trace_get_tpoint_group_mask", 01:25:53.772 "trace_disable_tpoint_group", 01:25:53.772 "trace_enable_tpoint_group", 01:25:53.772 "trace_clear_tpoint_mask", 01:25:53.772 "trace_set_tpoint_mask", 01:25:53.772 "notify_get_notifications", 01:25:53.772 "notify_get_types", 01:25:53.772 "spdk_get_version", 01:25:53.772 "rpc_get_methods" 01:25:53.772 ] 01:25:53.772 05:20:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:25:53.772 05:20:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:25:53.772 05:20:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58924 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58924 ']' 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58924 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58924 01:25:53.772 killing process with pid 58924 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58924' 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58924 01:25:53.772 05:20:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58924 01:25:56.300 ************************************ 01:25:56.300 END TEST spdkcli_tcp 01:25:56.300 ************************************ 01:25:56.300 01:25:56.300 real 0m4.101s 01:25:56.300 user 0m7.307s 01:25:56.300 sys 0m0.666s 01:25:56.300 05:20:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:56.300 05:20:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:25:56.300 05:20:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:25:56.300 05:20:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:25:56.300 05:20:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:56.300 05:20:47 -- common/autotest_common.sh@10 -- # set +x 01:25:56.300 ************************************ 01:25:56.300 START TEST dpdk_mem_utility 01:25:56.300 ************************************ 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:25:56.300 * Looking for test storage... 
01:25:56.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:56.300 05:20:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:56.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:56.300 --rc genhtml_branch_coverage=1 01:25:56.300 --rc genhtml_function_coverage=1 01:25:56.300 --rc genhtml_legend=1 01:25:56.300 --rc geninfo_all_blocks=1 01:25:56.300 --rc geninfo_unexecuted_blocks=1 01:25:56.300 01:25:56.300 ' 01:25:56.300 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:56.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:56.300 --rc 
genhtml_branch_coverage=1 01:25:56.300 --rc genhtml_function_coverage=1 01:25:56.300 --rc genhtml_legend=1 01:25:56.300 --rc geninfo_all_blocks=1 01:25:56.300 --rc geninfo_unexecuted_blocks=1 01:25:56.300 01:25:56.300 ' 01:25:56.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:56.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:56.301 --rc genhtml_branch_coverage=1 01:25:56.301 --rc genhtml_function_coverage=1 01:25:56.301 --rc genhtml_legend=1 01:25:56.301 --rc geninfo_all_blocks=1 01:25:56.301 --rc geninfo_unexecuted_blocks=1 01:25:56.301 01:25:56.301 ' 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:56.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:56.301 --rc genhtml_branch_coverage=1 01:25:56.301 --rc genhtml_function_coverage=1 01:25:56.301 --rc genhtml_legend=1 01:25:56.301 --rc geninfo_all_blocks=1 01:25:56.301 --rc geninfo_unexecuted_blocks=1 01:25:56.301 01:25:56.301 ' 01:25:56.301 05:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:25:56.301 05:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59046 01:25:56.301 05:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:56.301 05:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59046 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59046 ']' 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:56.301 05:20:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:25:56.301 [2024-12-09 05:20:47.882652] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
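The lcov probe traced above runs before every test group in this log; it splits the two version strings on "." and compares them field by field, so "lt 1.15 2" succeeds and the pre-2.0 lcov coverage flags are kept. A simplified sketch of just the less-than comparison (the real cmp_versions in scripts/common.sh also splits on "-" and ":" and implements the other operators):

    lt() {
        local IFS=.
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger field
        done
        return 1    # all fields equal: not less-than
    }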
01:25:56.301 [2024-12-09 05:20:47.883695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59046 ] 01:25:56.558 [2024-12-09 05:20:48.055267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:56.558 [2024-12-09 05:20:48.172074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:57.489 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:57.489 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 01:25:57.489 05:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:25:57.489 05:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:25:57.489 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:57.489 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:25:57.489 { 01:25:57.489 "filename": "/tmp/spdk_mem_dump.txt" 01:25:57.489 } 01:25:57.489 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:57.489 05:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:25:57.489 DPDK memory size 824.000000 MiB in 1 heap(s) 01:25:57.489 1 heaps totaling size 824.000000 MiB 01:25:57.489 size: 824.000000 MiB heap id: 0 01:25:57.489 end heaps---------- 01:25:57.489 9 mempools totaling size 603.782043 MiB 01:25:57.489 size: 212.674988 MiB name: PDU_immediate_data_Pool 01:25:57.489 size: 158.602051 MiB name: PDU_data_out_Pool 01:25:57.489 size: 100.555481 MiB name: bdev_io_59046 01:25:57.489 size: 50.003479 MiB name: msgpool_59046 01:25:57.489 size: 36.509338 MiB name: fsdev_io_59046 01:25:57.489 size: 21.763794 MiB name: PDU_Pool 01:25:57.489 size: 19.513306 MiB name: SCSI_TASK_Pool 01:25:57.489 size: 4.133484 MiB name: evtpool_59046 01:25:57.489 size: 0.026123 MiB name: Session_Pool 01:25:57.489 end mempools------- 01:25:57.489 6 memzones totaling size 4.142822 MiB 01:25:57.489 size: 1.000366 MiB name: RG_ring_0_59046 01:25:57.489 size: 1.000366 MiB name: RG_ring_1_59046 01:25:57.489 size: 1.000366 MiB name: RG_ring_4_59046 01:25:57.489 size: 1.000366 MiB name: RG_ring_5_59046 01:25:57.489 size: 0.125366 MiB name: RG_ring_2_59046 01:25:57.489 size: 0.015991 MiB name: RG_ring_3_59046 01:25:57.489 end memzones------- 01:25:57.489 05:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:25:57.747 heap id: 0 total size: 824.000000 MiB number of busy elements: 311 number of free elements: 18 01:25:57.747 list of free elements. 
size: 16.782349 MiB
01:25:57.747 element at address: 0x200006400000 with size: 1.995972 MiB
01:25:57.747 element at address: 0x20000a600000 with size: 1.995972 MiB
01:25:57.747 element at address: 0x200003e00000 with size: 1.991028 MiB
01:25:57.747 element at address: 0x200019500040 with size: 0.999939 MiB
01:25:57.747 element at address: 0x200019900040 with size: 0.999939 MiB
01:25:57.747 element at address: 0x200019a00000 with size: 0.999084 MiB
01:25:57.747 element at address: 0x200032600000 with size: 0.994324 MiB
01:25:57.747 element at address: 0x200000400000 with size: 0.992004 MiB
01:25:57.747 element at address: 0x200019200000 with size: 0.959656 MiB
01:25:57.747 element at address: 0x200019d00040 with size: 0.936401 MiB
01:25:57.747 element at address: 0x200000200000 with size: 0.716980 MiB
01:25:57.747 element at address: 0x20001b400000 with size: 0.563660 MiB
01:25:57.747 element at address: 0x200000c00000 with size: 0.489197 MiB
01:25:57.747 element at address: 0x200019600000 with size: 0.487976 MiB
01:25:57.747 element at address: 0x200019e00000 with size: 0.485413 MiB
01:25:57.747 element at address: 0x200012c00000 with size: 0.433472 MiB
01:25:57.747 element at address: 0x200028800000 with size: 0.390442 MiB
01:25:57.747 element at address: 0x200000800000 with size: 0.350891 MiB
01:25:57.747 list of standard malloc elements. size: 199.286743 MiB
01:25:57.747 element at address: 0x20000a7fef80 with size: 132.000183 MiB
01:25:57.747 element at address: 0x2000065fef80 with size: 64.000183 MiB
01:25:57.747 element at address: 0x2000193fff80 with size: 1.000183 MiB
01:25:57.747 element at address: 0x2000197fff80 with size: 1.000183 MiB
01:25:57.747 element at address: 0x200019bfff80 with size: 1.000183 MiB
01:25:57.747 element at address: 0x2000003d9e80 with size: 0.140808 MiB
01:25:57.747 element at address: 0x200019deff40 with size: 0.062683 MiB
01:25:57.747 element at address: 0x2000003fdf40 with size: 0.007996 MiB
01:25:57.747 element at address: 0x20000a5ff040 with size: 0.000427 MiB
01:25:57.747 element at address: 0x200019defdc0 with size: 0.000366 MiB
01:25:57.747 element at address: 0x200012bff040 with size: 0.000305 MiB
01:25:57.747 element at address: 0x2000002d7b00 with size: 0.000244 MiB
01:25:57.747 element at address: 0x2000003d9d80 with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x2000004fdf40 through 0x2000004ff940 (27 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x2000004ffbc0 through 0x2000004ffdc0 (3 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x20000087e1c0 through 0x20000087f4c0 (20 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x2000008ff800 and 0x2000008ffa80, each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x200000c7d3c0 through 0x200000c7ebc0 (25 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x200000cfef00 and 0x200000cff000, each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x20000a5ff200 through 0x20000a5fff00 (14 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x200012bff180 through 0x200012bffc80 (12 elements, step 0x100) and 0x200012bfff00, each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x200012c6ef80 through 0x200012c6f880 (10 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x200012cefbc0 and 0x2000192fdd00, each with size: 0.000244 MiB
01:25:57.748 elements at addresses 0x20001967cec0 through 0x20001967d9c0 (12 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.748 element at address: 0x2000196fdd00 with size: 0.000244 MiB
01:25:57.748 element at address: 0x200019affc40 with size: 0.000244 MiB
01:25:57.748 element at address: 0x200019defbc0 with size: 0.000244 MiB
01:25:57.749 element at address: 0x200019defcc0 with size: 0.000244 MiB
01:25:57.749 element at address: 0x200019ebc680 with size: 0.000244 MiB
01:25:57.749 elements at addresses 0x20001b4904c0 through 0x20001b4953c0 (80 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.749 elements at addresses 0x200028863f40 and 0x200028864040, each with size: 0.000244 MiB
01:25:57.749 elements at addresses 0x20002886ad00 and 0x20002886af80, each with size: 0.000244 MiB
01:25:57.749 elements at addresses 0x20002886b080 through 0x20002886fe80 (79 elements, step 0x100), each with size: 0.000244 MiB
01:25:57.750 list of memzone associated elements.
size: 607.930908 MiB 01:25:57.750 element at address: 0x20001b4954c0 with size: 211.416809 MiB 01:25:57.750 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 01:25:57.750 element at address: 0x20002886ff80 with size: 157.562622 MiB 01:25:57.750 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 01:25:57.750 element at address: 0x200012df1e40 with size: 100.055115 MiB 01:25:57.750 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59046_0 01:25:57.750 element at address: 0x200000dff340 with size: 48.003113 MiB 01:25:57.750 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59046_0 01:25:57.750 element at address: 0x200003ffdb40 with size: 36.008972 MiB 01:25:57.750 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59046_0 01:25:57.750 element at address: 0x200019fbe900 with size: 20.255615 MiB 01:25:57.750 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 01:25:57.750 element at address: 0x2000327feb00 with size: 18.005127 MiB 01:25:57.750 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 01:25:57.750 element at address: 0x2000004ffec0 with size: 3.000305 MiB 01:25:57.750 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59046_0 01:25:57.750 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 01:25:57.750 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59046 01:25:57.750 element at address: 0x2000002d7c00 with size: 1.008179 MiB 01:25:57.750 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59046 01:25:57.750 element at address: 0x2000196fde00 with size: 1.008179 MiB 01:25:57.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 01:25:57.750 element at address: 0x200019ebc780 with size: 1.008179 MiB 01:25:57.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 01:25:57.750 element at address: 0x2000192fde00 with size: 1.008179 MiB 01:25:57.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 01:25:57.750 element at address: 0x200012cefcc0 with size: 1.008179 MiB 01:25:57.750 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 01:25:57.750 element at address: 0x200000cff100 with size: 1.000549 MiB 01:25:57.750 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59046 01:25:57.750 element at address: 0x2000008ffb80 with size: 1.000549 MiB 01:25:57.750 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59046 01:25:57.750 element at address: 0x200019affd40 with size: 1.000549 MiB 01:25:57.750 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59046 01:25:57.750 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 01:25:57.750 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59046 01:25:57.750 element at address: 0x20000087f5c0 with size: 0.500549 MiB 01:25:57.750 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59046 01:25:57.750 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 01:25:57.750 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59046 01:25:57.750 element at address: 0x20001967dac0 with size: 0.500549 MiB 01:25:57.750 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 01:25:57.750 element at address: 0x200012c6f980 with size: 0.500549 MiB 01:25:57.750 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 01:25:57.750 element at address: 0x200019e7c440 with size: 0.250549 MiB 01:25:57.750 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 01:25:57.750 element at address: 0x2000002b78c0 with size: 0.125549 MiB 01:25:57.750 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59046 01:25:57.750 element at address: 0x20000085df80 with size: 0.125549 MiB 01:25:57.750 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59046 01:25:57.750 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 01:25:57.750 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 01:25:57.750 element at address: 0x200028864140 with size: 0.023804 MiB 01:25:57.750 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 01:25:57.750 element at address: 0x200000859d40 with size: 0.016174 MiB 01:25:57.750 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59046 01:25:57.750 element at address: 0x20002886a2c0 with size: 0.002502 MiB 01:25:57.750 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 01:25:57.750 element at address: 0x2000004ffa40 with size: 0.000366 MiB 01:25:57.750 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59046 01:25:57.750 element at address: 0x2000008ff900 with size: 0.000366 MiB 01:25:57.750 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59046 01:25:57.750 element at address: 0x200012bffd80 with size: 0.000366 MiB 01:25:57.750 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59046 01:25:57.750 element at address: 0x20002886ae00 with size: 0.000366 MiB 01:25:57.750 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 01:25:57.750 05:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 01:25:57.750 05:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59046 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59046 ']' 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59046 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59046 01:25:57.750 killing process with pid 59046 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59046' 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59046 01:25:57.750 05:20:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59046 01:26:00.307 01:26:00.307 real 0m3.851s 01:26:00.307 user 0m3.844s 01:26:00.307 sys 0m0.632s 01:26:00.307 05:20:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:00.307 ************************************ 01:26:00.307 END TEST dpdk_mem_utility 01:26:00.307 ************************************ 01:26:00.307 05:20:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:26:00.307 05:20:51 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:26:00.307 05:20:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:00.307 05:20:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:00.307 05:20:51 -- common/autotest_common.sh@10 -- # set +x 
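The xtrace above shows the harness tearing down the app under test: killprocess from autotest_common.sh confirms the pid is non-empty, probes it with kill -0, resolves the process name (reactor_0 here) via ps, refuses to signal anything running as sudo, then kills and reaps the process so the timing summary can be printed. A minimal bash sketch of that pattern, assuming a Linux host (the function name killprocess_sketch and the standalone framing are illustrative, not the repo's exact code):

    # Kill a test app by pid, mirroring the guard sequence in the trace above.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                     # bail out on an empty pid ('[ -z 59046 ]' guard)
        kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if the process is already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
            [ "$name" = sudo ] && return 1            # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap the child so its exit status is collected
    }

The final wait is what lets the shell report the real/user/sys times that follow in the log.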
01:26:00.307 ************************************ 01:26:00.307 START TEST event 01:26:00.307 ************************************ 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:26:00.307 * Looking for test storage... 01:26:00.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1693 -- # lcov --version 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:00.307 05:20:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:00.307 05:20:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:00.307 05:20:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:00.307 05:20:51 event -- scripts/common.sh@336 -- # IFS=.-: 01:26:00.307 05:20:51 event -- scripts/common.sh@336 -- # read -ra ver1 01:26:00.307 05:20:51 event -- scripts/common.sh@337 -- # IFS=.-: 01:26:00.307 05:20:51 event -- scripts/common.sh@337 -- # read -ra ver2 01:26:00.307 05:20:51 event -- scripts/common.sh@338 -- # local 'op=<' 01:26:00.307 05:20:51 event -- scripts/common.sh@340 -- # ver1_l=2 01:26:00.307 05:20:51 event -- scripts/common.sh@341 -- # ver2_l=1 01:26:00.307 05:20:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:00.307 05:20:51 event -- scripts/common.sh@344 -- # case "$op" in 01:26:00.307 05:20:51 event -- scripts/common.sh@345 -- # : 1 01:26:00.307 05:20:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:00.307 05:20:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:26:00.307 05:20:51 event -- scripts/common.sh@365 -- # decimal 1 01:26:00.307 05:20:51 event -- scripts/common.sh@353 -- # local d=1 01:26:00.307 05:20:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:00.307 05:20:51 event -- scripts/common.sh@355 -- # echo 1 01:26:00.307 05:20:51 event -- scripts/common.sh@365 -- # ver1[v]=1 01:26:00.307 05:20:51 event -- scripts/common.sh@366 -- # decimal 2 01:26:00.307 05:20:51 event -- scripts/common.sh@353 -- # local d=2 01:26:00.307 05:20:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:00.307 05:20:51 event -- scripts/common.sh@355 -- # echo 2 01:26:00.307 05:20:51 event -- scripts/common.sh@366 -- # ver2[v]=2 01:26:00.307 05:20:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:00.307 05:20:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:00.307 05:20:51 event -- scripts/common.sh@368 -- # return 0 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:00.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:00.307 --rc genhtml_branch_coverage=1 01:26:00.307 --rc genhtml_function_coverage=1 01:26:00.307 --rc genhtml_legend=1 01:26:00.307 --rc geninfo_all_blocks=1 01:26:00.307 --rc geninfo_unexecuted_blocks=1 01:26:00.307 01:26:00.307 ' 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:00.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:00.307 --rc genhtml_branch_coverage=1 01:26:00.307 --rc genhtml_function_coverage=1 01:26:00.307 --rc genhtml_legend=1 01:26:00.307 --rc 
geninfo_all_blocks=1 01:26:00.307 --rc geninfo_unexecuted_blocks=1 01:26:00.307 01:26:00.307 ' 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:00.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:00.307 --rc genhtml_branch_coverage=1 01:26:00.307 --rc genhtml_function_coverage=1 01:26:00.307 --rc genhtml_legend=1 01:26:00.307 --rc geninfo_all_blocks=1 01:26:00.307 --rc geninfo_unexecuted_blocks=1 01:26:00.307 01:26:00.307 ' 01:26:00.307 05:20:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:00.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:00.307 --rc genhtml_branch_coverage=1 01:26:00.307 --rc genhtml_function_coverage=1 01:26:00.307 --rc genhtml_legend=1 01:26:00.308 --rc geninfo_all_blocks=1 01:26:00.308 --rc geninfo_unexecuted_blocks=1 01:26:00.308 01:26:00.308 ' 01:26:00.308 05:20:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:26:00.308 05:20:51 event -- bdev/nbd_common.sh@6 -- # set -e 01:26:00.308 05:20:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:26:00.308 05:20:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:26:00.308 05:20:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:00.308 05:20:51 event -- common/autotest_common.sh@10 -- # set +x 01:26:00.308 ************************************ 01:26:00.308 START TEST event_perf 01:26:00.308 ************************************ 01:26:00.308 05:20:51 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:26:00.308 Running I/O for 1 seconds...[2024-12-09 05:20:51.756315] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:00.308 [2024-12-09 05:20:51.756748] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59149 ] 01:26:00.566 [2024-12-09 05:20:51.944022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:26:00.566 [2024-12-09 05:20:52.076959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:00.566 [2024-12-09 05:20:52.077063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:00.566 [2024-12-09 05:20:52.077214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:00.566 Running I/O for 1 seconds...[2024-12-09 05:20:52.077230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:26:01.941 01:26:01.941 lcore 0: 194671 01:26:01.941 lcore 1: 194669 01:26:01.941 lcore 2: 194670 01:26:01.941 lcore 3: 194672 01:26:01.941 done. 
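event_perf starts one reactor per bit of the coremask, so -m 0xF -t 1 yields the four 'lcore N:' counters above, each the number of events that core processed during the one-second run. A quick way to total them from a captured log, as a sketch (the file name event_perf.log is hypothetical):

    # Sum the per-lcore counters that event_perf prints at the end of a run.
    grep -Eo 'lcore [0-9]+: [0-9]+' event_perf.log \
      | awk '{ sum += $3 } END { printf "%d events in total\n", sum }'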
01:26:01.941 01:26:01.941 real 0m1.695s 01:26:01.941 ************************************ 01:26:01.941 END TEST event_perf 01:26:01.941 ************************************ 01:26:01.941 user 0m4.443s 01:26:01.941 sys 0m0.129s 01:26:01.941 05:20:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:01.941 05:20:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 01:26:01.941 05:20:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:26:01.941 05:20:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:26:01.941 05:20:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:01.941 05:20:53 event -- common/autotest_common.sh@10 -- # set +x 01:26:01.941 ************************************ 01:26:01.941 START TEST event_reactor 01:26:01.941 ************************************ 01:26:01.941 05:20:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:26:01.941 [2024-12-09 05:20:53.497924] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:01.942 [2024-12-09 05:20:53.498087] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59194 ] 01:26:02.200 [2024-12-09 05:20:53.674759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:02.200 [2024-12-09 05:20:53.793280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:03.576 test_start 01:26:03.576 oneshot 01:26:03.576 tick 100 01:26:03.576 tick 100 01:26:03.576 tick 250 01:26:03.576 tick 100 01:26:03.576 tick 100 01:26:03.576 tick 250 01:26:03.576 tick 500 01:26:03.576 tick 100 01:26:03.576 tick 100 01:26:03.576 tick 100 01:26:03.576 tick 250 01:26:03.576 tick 100 01:26:03.576 tick 100 01:26:03.576 test_end 01:26:03.576 ************************************ 01:26:03.576 END TEST event_reactor 01:26:03.576 ************************************ 01:26:03.576 01:26:03.576 real 0m1.635s 01:26:03.576 user 0m1.426s 01:26:03.576 sys 0m0.100s 01:26:03.576 05:20:55 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:03.576 05:20:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 01:26:03.576 05:20:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:26:03.576 05:20:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:26:03.576 05:20:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:03.576 05:20:55 event -- common/autotest_common.sh@10 -- # set +x 01:26:03.576 ************************************ 01:26:03.576 START TEST event_reactor_perf 01:26:03.576 ************************************ 01:26:03.576 05:20:55 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:26:03.834 [2024-12-09 05:20:55.193412] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:03.834 [2024-12-09 05:20:55.193838] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 01:26:03.834 [2024-12-09 05:20:55.385921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:04.092 [2024-12-09 05:20:55.513015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:05.465 test_start 01:26:05.465 test_end 01:26:05.465 Performance: 306915 events per second 01:26:05.465 01:26:05.465 real 0m1.671s 01:26:05.465 user 0m1.452s 01:26:05.465 sys 0m0.108s 01:26:05.465 05:20:56 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:05.465 05:20:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 01:26:05.465 ************************************ 01:26:05.465 END TEST event_reactor_perf 01:26:05.465 ************************************ 01:26:05.465 05:20:56 event -- event/event.sh@49 -- # uname -s 01:26:05.465 05:20:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 01:26:05.465 05:20:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:26:05.465 05:20:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:05.465 05:20:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:05.465 05:20:56 event -- common/autotest_common.sh@10 -- # set +x 01:26:05.465 ************************************ 01:26:05.465 START TEST event_scheduler 01:26:05.465 ************************************ 01:26:05.465 05:20:56 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:26:05.465 * Looking for test storage... 
01:26:05.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 01:26:05.465 05:20:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:05.465 05:20:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 01:26:05.465 05:20:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:05.465 05:20:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:05.465 --rc genhtml_branch_coverage=1 01:26:05.465 --rc genhtml_function_coverage=1 01:26:05.465 --rc genhtml_legend=1 01:26:05.465 --rc geninfo_all_blocks=1 01:26:05.465 --rc geninfo_unexecuted_blocks=1 01:26:05.465 01:26:05.465 ' 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:05.465 --rc genhtml_branch_coverage=1 01:26:05.465 --rc genhtml_function_coverage=1 01:26:05.465 --rc genhtml_legend=1 01:26:05.465 --rc geninfo_all_blocks=1 01:26:05.465 --rc geninfo_unexecuted_blocks=1 01:26:05.465 01:26:05.465 ' 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:05.465 --rc genhtml_branch_coverage=1 01:26:05.465 --rc genhtml_function_coverage=1 01:26:05.465 --rc genhtml_legend=1 01:26:05.465 --rc geninfo_all_blocks=1 01:26:05.465 --rc geninfo_unexecuted_blocks=1 01:26:05.465 01:26:05.465 ' 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:05.465 --rc genhtml_branch_coverage=1 01:26:05.465 --rc genhtml_function_coverage=1 01:26:05.465 --rc genhtml_legend=1 01:26:05.465 --rc geninfo_all_blocks=1 01:26:05.465 --rc geninfo_unexecuted_blocks=1 01:26:05.465 01:26:05.465 ' 01:26:05.465 05:20:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 01:26:05.465 05:20:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59306 01:26:05.465 05:20:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 01:26:05.465 05:20:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 01:26:05.465 05:20:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59306 01:26:05.465 05:20:57 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59306 ']' 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:05.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:05.465 05:20:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:26:05.726 [2024-12-09 05:20:57.180643] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:05.726 [2024-12-09 05:20:57.180867] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 01:26:06.018 [2024-12-09 05:20:57.363571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:26:06.018 [2024-12-09 05:20:57.504621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:06.018 [2024-12-09 05:20:57.504766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:06.018 [2024-12-09 05:20:57.504927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:06.018 [2024-12-09 05:20:57.504941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:26:06.584 05:20:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:06.584 05:20:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 01:26:06.585 05:20:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 01:26:06.585 05:20:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.585 05:20:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:26:06.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:26:06.585 POWER: Cannot set governor of lcore 0 to userspace 01:26:06.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:26:06.585 POWER: Cannot set governor of lcore 0 to performance 01:26:06.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:26:06.585 POWER: Cannot set governor of lcore 0 to userspace 01:26:06.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:26:06.585 POWER: Cannot set governor of lcore 0 to userspace 01:26:06.585 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 01:26:06.585 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 01:26:06.585 POWER: Unable to set Power Management Environment for lcore 0 01:26:06.585 [2024-12-09 05:20:58.175392] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 01:26:06.585 [2024-12-09 05:20:58.175421] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 01:26:06.585 [2024-12-09 05:20:58.175435] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 01:26:06.585 [2024-12-09 05:20:58.175459] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 01:26:06.585 [2024-12-09 05:20:58.175471] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 01:26:06.585 [2024-12-09 05:20:58.175485] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 01:26:06.585 05:20:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.585 05:20:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 01:26:06.585 05:20:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.585 05:20:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 [2024-12-09 05:20:58.496251] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 01:26:07.150 05:20:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.150 05:20:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 01:26:07.150 05:20:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:07.150 05:20:58 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:07.150 05:20:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 ************************************ 01:26:07.150 START TEST scheduler_create_thread 01:26:07.150 ************************************ 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 2 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 3 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 4 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 5 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.150 6 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 01:26:07.150 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.151 7 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.151 8 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.151 9 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.151 10 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.151 05:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:08.527 05:21:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:08.527 05:21:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 01:26:08.527 05:21:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 01:26:08.527 05:21:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:08.527 05:21:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:09.912 ************************************ 01:26:09.912 END TEST scheduler_create_thread 01:26:09.912 ************************************ 01:26:09.912 05:21:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:09.912 01:26:09.912 real 0m2.617s 01:26:09.912 user 0m0.012s 01:26:09.912 sys 0m0.011s 01:26:09.912 05:21:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:09.912 05:21:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:26:09.912 05:21:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:26:09.912 05:21:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59306 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59306 ']' 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59306 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59306 01:26:09.912 killing process with pid 59306 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59306' 01:26:09.912 05:21:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59306 01:26:09.912 05:21:01 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59306 01:26:10.170 [2024-12-09 05:21:01.607473] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 01:26:11.550 01:26:11.550 real 0m5.972s 01:26:11.550 user 0m10.461s 01:26:11.550 sys 0m0.542s 01:26:11.550 ************************************ 01:26:11.550 END TEST event_scheduler 01:26:11.550 ************************************ 01:26:11.550 05:21:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:11.550 05:21:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:26:11.550 05:21:02 event -- event/event.sh@51 -- # modprobe -n nbd 01:26:11.550 05:21:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:26:11.550 05:21:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:11.550 05:21:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:11.550 05:21:02 event -- common/autotest_common.sh@10 -- # set +x 01:26:11.550 ************************************ 01:26:11.550 START TEST app_repeat 01:26:11.550 ************************************ 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59419 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59419' 01:26:11.550 Process app_repeat pid: 59419 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:26:11.550 spdk_app_start Round 0 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:26:11.550 05:21:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59419 /var/tmp/spdk-nbd.sock 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59419 ']' 01:26:11.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:11.550 05:21:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:26:11.550 [2024-12-09 05:21:02.965606] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:11.550 [2024-12-09 05:21:02.965784] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59419 ] 01:26:11.550 [2024-12-09 05:21:03.150264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:26:11.809 [2024-12-09 05:21:03.316985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:11.809 [2024-12-09 05:21:03.316985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:12.746 05:21:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:12.746 05:21:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:26:12.746 05:21:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:26:12.746 Malloc0 01:26:13.004 05:21:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:26:13.263 Malloc1 01:26:13.263 05:21:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:13.263 05:21:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:26:13.521 /dev/nbd0 01:26:13.521 05:21:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:26:13.521 05:21:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:26:13.521 05:21:05 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:26:13.521 1+0 records in 01:26:13.521 1+0 records out 01:26:13.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269843 s, 15.2 MB/s 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:13.521 05:21:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:26:13.521 05:21:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:13.521 05:21:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:13.521 05:21:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:26:13.779 /dev/nbd1 01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:26:13.779 1+0 records in 01:26:13.779 1+0 records out 01:26:13.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313463 s, 13.1 MB/s 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:13.779 05:21:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
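Both NBD devices pass the same readiness probe before use: the device name must appear in /proc/partitions, and a single direct-I/O read through the device must land a non-empty block in the test file. A standalone reconstruction of that probe, following the trace above (the real logic is inlined in common/autotest_common.sh; waitfornbd_sketch is a hypothetical wrapper):

  # Reconstruction of the waitfornbd probe traced above.
  waitfornbd_sketch() {
      local nbd_name=$1 tmp=/tmp/nbdtest
      local i
      # Wait up to 20 polls for the kernel to publish the device.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      grep -q -w "$nbd_name" /proc/partitions || return 1
      # Prove the device actually serves reads: one direct 4 KiB block.
      dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [[ $size != 0 ]]
  }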
01:26:13.779 05:21:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:26:14.345 { 01:26:14.345 "nbd_device": "/dev/nbd0", 01:26:14.345 "bdev_name": "Malloc0" 01:26:14.345 }, 01:26:14.345 { 01:26:14.345 "nbd_device": "/dev/nbd1", 01:26:14.345 "bdev_name": "Malloc1" 01:26:14.345 } 01:26:14.345 ]' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:26:14.345 { 01:26:14.345 "nbd_device": "/dev/nbd0", 01:26:14.345 "bdev_name": "Malloc0" 01:26:14.345 }, 01:26:14.345 { 01:26:14.345 "nbd_device": "/dev/nbd1", 01:26:14.345 "bdev_name": "Malloc1" 01:26:14.345 } 01:26:14.345 ]' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:26:14.345 /dev/nbd1' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:26:14.345 /dev/nbd1' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:26:14.345 256+0 records in 01:26:14.345 256+0 records out 01:26:14.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00797019 s, 132 MB/s 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:26:14.345 256+0 records in 01:26:14.345 256+0 records out 01:26:14.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263063 s, 39.9 MB/s 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:26:14.345 256+0 records in 01:26:14.345 256+0 records out 01:26:14.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354458 s, 29.6 MB/s 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:14.345 05:21:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:26:14.346 05:21:05 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:14.346 05:21:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:14.603 05:21:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:26:15.169 05:21:06 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:26:15.169 05:21:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:26:15.427 05:21:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:26:15.427 05:21:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:26:15.994 05:21:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:26:16.982 [2024-12-09 05:21:08.494007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:26:17.241 [2024-12-09 05:21:08.623841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:17.241 [2024-12-09 05:21:08.623854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:17.241 [2024-12-09 05:21:08.816704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:26:17.241 [2024-12-09 05:21:08.816812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:26:19.140 05:21:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:26:19.140 spdk_app_start Round 1 01:26:19.140 05:21:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:26:19.140 05:21:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59419 /var/tmp/spdk-nbd.sock 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59419 ']' 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:26:19.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
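Rounds 1 and 2 replay exactly the sequence Round 0 just ran: wait for the relaunched app, rebuild the malloc bdevs and NBD exports, write and verify data, then tear the instance down. The driving loop, reconstructed in condensed form from the event.sh trace lines (not the verbatim script):

  # Condensed shape of the repeat loop (for i in {0..2} at event.sh@23);
  # the per-round body is elided into a comment.
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      # Block until the restarted app listens on the RPC socket again.
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      # ... create Malloc0/Malloc1, attach /dev/nbd0-1, write and verify ...
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3   # give the app time to cycle before the next round
  done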
01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:19.140 05:21:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:26:19.140 05:21:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:26:19.398 Malloc0 01:26:19.398 05:21:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:26:19.656 Malloc1 01:26:19.656 05:21:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:19.656 05:21:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:26:19.914 /dev/nbd0 01:26:19.914 05:21:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:26:19.914 05:21:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:26:19.914 1+0 records in 01:26:19.914 1+0 records out 
01:26:19.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219665 s, 18.6 MB/s 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:19.914 05:21:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:26:19.914 05:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:19.914 05:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:19.914 05:21:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:26:20.173 /dev/nbd1 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:26:20.431 1+0 records in 01:26:20.431 1+0 records out 01:26:20.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333512 s, 12.3 MB/s 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:20.431 05:21:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:20.431 05:21:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:26:20.689 { 01:26:20.689 "nbd_device": "/dev/nbd0", 01:26:20.689 "bdev_name": "Malloc0" 01:26:20.689 }, 01:26:20.689 { 01:26:20.689 "nbd_device": "/dev/nbd1", 01:26:20.689 "bdev_name": "Malloc1" 01:26:20.689 } 
01:26:20.689 ]' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:26:20.689 { 01:26:20.689 "nbd_device": "/dev/nbd0", 01:26:20.689 "bdev_name": "Malloc0" 01:26:20.689 }, 01:26:20.689 { 01:26:20.689 "nbd_device": "/dev/nbd1", 01:26:20.689 "bdev_name": "Malloc1" 01:26:20.689 } 01:26:20.689 ]' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:26:20.689 /dev/nbd1' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:26:20.689 /dev/nbd1' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:26:20.689 256+0 records in 01:26:20.689 256+0 records out 01:26:20.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070564 s, 149 MB/s 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:26:20.689 256+0 records in 01:26:20.689 256+0 records out 01:26:20.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026726 s, 39.2 MB/s 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:26:20.689 256+0 records in 01:26:20.689 256+0 records out 01:26:20.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283787 s, 36.9 MB/s 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:20.689 05:21:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:20.947 05:21:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:21.206 05:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:26:21.464 05:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:26:21.464 05:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:26:21.464 05:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:26:21.722 05:21:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:26:21.722 05:21:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:26:21.979 05:21:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:26:23.406 [2024-12-09 05:21:14.619322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:26:23.406 [2024-12-09 05:21:14.745840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:23.406 [2024-12-09 05:21:14.745848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:23.406 [2024-12-09 05:21:14.935626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:26:23.406 [2024-12-09 05:21:14.935713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:26:25.303 spdk_app_start Round 2 01:26:25.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:26:25.303 05:21:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:26:25.303 05:21:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:26:25.303 05:21:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59419 /var/tmp/spdk-nbd.sock 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59419 ']' 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
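Round 2 now rebuilds its block stack with the same two RPCs as before: bdev_malloc_create allocates a RAM-backed bdev (the positional arguments are total size in MB and block size in bytes, so 64 4096 gives a 64 MB device with 4 KiB blocks), and nbd_start_disk exports it through the kernel NBD driver. Per device, that is:

  # The two RPCs each round issues per RAM disk, as traced above;
  # the bdev name (Malloc0/Malloc1) is returned by the create call.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0

The trace shows no explicit bdev deletion between rounds: nbd_stop_disk detaches the devices, and spdk_kill_instance tears down the whole instance, taking the malloc bdevs with it.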
01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:25.303 05:21:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:26:25.303 05:21:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:26:25.561 Malloc0 01:26:25.819 05:21:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:26:26.077 Malloc1 01:26:26.077 05:21:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:26.077 05:21:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:26.078 05:21:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:26:26.336 /dev/nbd0 01:26:26.336 05:21:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:26:26.336 05:21:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:26:26.336 1+0 records in 01:26:26.336 1+0 records out 
01:26:26.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026694 s, 15.3 MB/s 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:26.336 05:21:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:26:26.336 05:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:26.336 05:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:26.336 05:21:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:26:26.594 /dev/nbd1 01:26:26.594 05:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:26:26.594 05:21:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:26:26.594 1+0 records in 01:26:26.594 1+0 records out 01:26:26.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367128 s, 11.2 MB/s 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:26.594 05:21:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:26:26.594 05:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:26.594 05:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:26.594 05:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:26:26.594 05:21:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:26.595 05:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:26:26.873 05:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:26:26.873 { 01:26:26.873 "nbd_device": "/dev/nbd0", 01:26:26.873 "bdev_name": "Malloc0" 01:26:26.873 }, 01:26:26.873 { 01:26:26.873 "nbd_device": "/dev/nbd1", 01:26:26.873 "bdev_name": "Malloc1" 01:26:26.873 } 
01:26:26.873 ]' 01:26:26.873 05:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:26:26.873 { 01:26:26.873 "nbd_device": "/dev/nbd0", 01:26:26.873 "bdev_name": "Malloc0" 01:26:26.873 }, 01:26:26.873 { 01:26:26.873 "nbd_device": "/dev/nbd1", 01:26:26.873 "bdev_name": "Malloc1" 01:26:26.873 } 01:26:26.873 ]' 01:26:26.873 05:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:26:27.130 05:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:26:27.130 /dev/nbd1' 01:26:27.130 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:26:27.131 /dev/nbd1' 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:26:27.131 256+0 records in 01:26:27.131 256+0 records out 01:26:27.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722959 s, 145 MB/s 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:26:27.131 256+0 records in 01:26:27.131 256+0 records out 01:26:27.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282515 s, 37.1 MB/s 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:26:27.131 256+0 records in 01:26:27.131 256+0 records out 01:26:27.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362401 s, 28.9 MB/s 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:26:27.131 05:21:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:27.131 05:21:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:27.389 05:21:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:26:27.954 05:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:26:28.212 05:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:26:28.213 05:21:19 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:26:28.213 05:21:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:26:28.213 05:21:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:26:28.780 05:21:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:26:29.714 [2024-12-09 05:21:21.201303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:26:30.028 [2024-12-09 05:21:21.334872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:30.028 [2024-12-09 05:21:21.334885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:30.028 [2024-12-09 05:21:21.531368] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:26:30.028 [2024-12-09 05:21:21.531508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:26:31.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:26:31.959 05:21:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59419 /var/tmp/spdk-nbd.sock 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59419 ']' 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
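The data check each of the three rounds ran (nbd_dd_data_verify) is a plain dd/cmp round-trip: seed 1 MiB of random data into a temp file, write it through each NBD device with direct I/O, then compare each device byte-for-byte against the seed. Extracted from the nbd_common.sh trace for readability:

  # Write/verify round-trip as traced in each round (paths shortened).
  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB seed
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"                              # verify phase
  done
  rm "$tmp"

cmp -n 1M limits the comparison to the 1 MiB that was written; a nonzero exit from either phase fails the round.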
01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:26:31.959 05:21:23 event.app_repeat -- event/event.sh@39 -- # killprocess 59419 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59419 ']' 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59419 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59419 01:26:31.959 killing process with pid 59419 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59419' 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59419 01:26:31.959 05:21:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59419 01:26:32.895 spdk_app_start is called in Round 0. 01:26:32.895 Shutdown signal received, stop current app iteration 01:26:32.895 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:26:32.895 spdk_app_start is called in Round 1. 01:26:32.895 Shutdown signal received, stop current app iteration 01:26:32.895 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:26:32.895 spdk_app_start is called in Round 2. 01:26:32.895 Shutdown signal received, stop current app iteration 01:26:32.895 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:26:32.895 spdk_app_start is called in Round 3. 01:26:32.895 Shutdown signal received, stop current app iteration 01:26:32.895 05:21:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:26:32.895 05:21:24 event.app_repeat -- event/event.sh@42 -- # return 0 01:26:32.895 01:26:32.895 real 0m21.482s 01:26:32.895 user 0m47.401s 01:26:32.895 sys 0m3.071s 01:26:32.895 05:21:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:32.895 05:21:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:26:32.895 ************************************ 01:26:32.895 END TEST app_repeat 01:26:32.895 ************************************ 01:26:32.895 05:21:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:26:32.895 05:21:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:26:32.895 05:21:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:32.895 05:21:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:32.896 05:21:24 event -- common/autotest_common.sh@10 -- # set +x 01:26:32.896 ************************************ 01:26:32.896 START TEST cpu_locks 01:26:32.896 ************************************ 01:26:32.896 05:21:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:26:33.155 * Looking for test storage... 
01:26:33.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:33.155 05:21:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:33.155 --rc genhtml_branch_coverage=1 01:26:33.155 --rc genhtml_function_coverage=1 01:26:33.155 --rc genhtml_legend=1 01:26:33.155 --rc geninfo_all_blocks=1 01:26:33.155 --rc geninfo_unexecuted_blocks=1 01:26:33.155 01:26:33.155 ' 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:33.155 --rc genhtml_branch_coverage=1 01:26:33.155 --rc genhtml_function_coverage=1 
01:26:33.155 --rc genhtml_legend=1 01:26:33.155 --rc geninfo_all_blocks=1 01:26:33.155 --rc geninfo_unexecuted_blocks=1 01:26:33.155 01:26:33.155 ' 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:33.155 --rc genhtml_branch_coverage=1 01:26:33.155 --rc genhtml_function_coverage=1 01:26:33.155 --rc genhtml_legend=1 01:26:33.155 --rc geninfo_all_blocks=1 01:26:33.155 --rc geninfo_unexecuted_blocks=1 01:26:33.155 01:26:33.155 ' 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:33.155 --rc genhtml_branch_coverage=1 01:26:33.155 --rc genhtml_function_coverage=1 01:26:33.155 --rc genhtml_legend=1 01:26:33.155 --rc geninfo_all_blocks=1 01:26:33.155 --rc geninfo_unexecuted_blocks=1 01:26:33.155 01:26:33.155 ' 01:26:33.155 05:21:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:26:33.155 05:21:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:26:33.155 05:21:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:26:33.155 05:21:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:33.155 05:21:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:26:33.155 ************************************ 01:26:33.155 START TEST default_locks 01:26:33.155 ************************************ 01:26:33.155 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 01:26:33.155 05:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59890 01:26:33.155 05:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59890 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59890 ']' 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:33.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:33.156 05:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:26:33.156 [2024-12-09 05:21:24.752554] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
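The lt/cmp_versions trace above is a component-wise version comparison: both version strings are split on '.', '-' and ':' and compared field by field to decide whether the installed lcov predates 2.x. A minimal standalone sketch of the same idea, assuming purely numeric components; this is not the exact scripts/common.sh code:

    # Return 0 (true) if version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:                     # split fields the same way the trace does
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                          # equal is not "less than"
    }
    version_lt 1.15 2 && echo 'old lcov: pass the legacy --rc coverage options'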
01:26:33.156 [2024-12-09 05:21:24.752816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59890 ] 01:26:33.414 [2024-12-09 05:21:24.938452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:33.682 [2024-12-09 05:21:25.057661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:34.616 05:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:34.616 05:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 01:26:34.616 05:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59890 01:26:34.616 05:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59890 01:26:34.616 05:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59890 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59890 ']' 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59890 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59890 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59890' 01:26:34.874 killing process with pid 59890 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59890 01:26:34.874 05:21:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59890 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59890 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59890 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59890 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59890 ']' 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:37.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
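The locks_exist helper traced above reduces to a single lslocks check: a target started with -m 0x1 takes a file lock on /var/tmp/spdk_cpu_lock_000, which lslocks can attribute to the pid. A rough reproduction, with the binary path relative to an SPDK checkout and a sleep standing in for the real waitforlisten:

    # Start a target pinned to core 0; it claims /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                   # crude stand-in for waitforlisten
    # The core lock is a POSIX file lock, so it is visible per process:
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"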
01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:26:37.401 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59890) - No such process 01:26:37.401 ERROR: process (pid: 59890) is no longer running 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:26:37.401 01:26:37.401 real 0m3.923s 01:26:37.401 user 0m3.839s 01:26:37.401 sys 0m0.718s 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:37.401 05:21:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:26:37.401 ************************************ 01:26:37.401 END TEST default_locks 01:26:37.401 ************************************ 01:26:37.401 05:21:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:26:37.401 05:21:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:37.401 05:21:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:37.401 05:21:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:26:37.401 ************************************ 01:26:37.401 START TEST default_locks_via_rpc 01:26:37.401 ************************************ 01:26:37.401 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 01:26:37.401 05:21:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59967 01:26:37.401 05:21:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59967 01:26:37.401 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59967 ']' 01:26:37.401 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:37.402 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:37.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
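The kill/NOT/no_locks sequence that closes default_locks above is the suite's "expected failure" idiom: after killprocess, waitforlisten must itself fail (es=1) and the lock-file glob must come back empty. The pattern, much simplified from the real autotest_common.sh helpers:

    # Succeed only if the wrapped command fails.
    NOT() { "$@" && return 1 || return 0; }

    NOT kill -0 59890                         # pid must be gone
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)     # same glob as the no_locks check
    (( ${#lock_files[@]} == 0 )) && echo 'no stale core locks'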
01:26:37.402 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:37.402 05:21:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:26:37.402 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:37.402 05:21:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:37.402 [2024-12-09 05:21:28.743786] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:37.402 [2024-12-09 05:21:28.744038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 01:26:37.402 [2024-12-09 05:21:28.933387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:37.659 [2024-12-09 05:21:29.058739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59967 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:26:38.592 05:21:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59967 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59967 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59967 ']' 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59967 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 01:26:38.850 05:21:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59967 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:38.850 killing process with pid 59967 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59967' 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59967 01:26:38.850 05:21:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59967 01:26:41.378 01:26:41.378 real 0m4.049s 01:26:41.378 user 0m4.084s 01:26:41.378 sys 0m0.777s 01:26:41.378 05:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:41.378 05:21:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:26:41.378 ************************************ 01:26:41.378 END TEST default_locks_via_rpc 01:26:41.378 ************************************ 01:26:41.378 05:21:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:26:41.378 05:21:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:41.378 05:21:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:41.378 05:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:26:41.378 ************************************ 01:26:41.378 START TEST non_locking_app_on_locked_coremask 01:26:41.378 ************************************ 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60036 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60036 /var/tmp/spdk.sock 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60036 ']' 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:41.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:41.378 05:21:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:26:41.378 [2024-12-09 05:21:32.841910] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
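default_locks_via_rpc, which finished just above, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks re-claims them. Against a running target the flow is roughly the following, with $pid the target's pid and the rpc.py path relative to an SPDK checkout:

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo 'locks released'
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'locks re-acquired'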
01:26:41.378 [2024-12-09 05:21:32.842130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 01:26:41.636 [2024-12-09 05:21:33.028885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:41.636 [2024-12-09 05:21:33.150434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:26:42.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60058 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60058 /var/tmp/spdk2.sock 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60058 ']' 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:42.571 05:21:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:26:42.571 [2024-12-09 05:21:34.143743] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:42.571 [2024-12-09 05:21:34.143984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60058 ] 01:26:42.828 [2024-12-09 05:21:34.341496] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
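The second instance above starts cleanly only because of --disable-cpumask-locks ("CPU core locks deactivated"): it shares core 0 with the lock holder instead of competing for the lock file. A minimal reproduction, binary path assumed:

    # First instance claims core 0; the second opts out of locking and coexists.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    sleep 1
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # Without the flag the second start aborts: "Cannot create lock on core 0,
    # probably process <pid> has claimed it." (exercised by the later tests).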
01:26:42.828 [2024-12-09 05:21:34.341572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:43.087 [2024-12-09 05:21:34.614179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:45.664 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:45.664 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:26:45.664 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60036 01:26:45.664 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60036 01:26:45.664 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:26:46.230 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60036 01:26:46.230 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60036 ']' 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60036 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60036 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:46.231 killing process with pid 60036 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60036' 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60036 01:26:46.231 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60036 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60058 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60058 ']' 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60058 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60058 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:51.498 killing process with pid 60058 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60058' 01:26:51.498 05:21:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60058 01:26:51.498 05:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60058 01:26:52.887 01:26:52.887 real 0m11.576s 01:26:52.887 user 0m12.045s 01:26:52.887 sys 0m1.579s 01:26:52.887 05:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:52.887 05:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:26:52.887 ************************************ 01:26:52.887 END TEST non_locking_app_on_locked_coremask 01:26:52.887 ************************************ 01:26:52.887 05:21:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:26:52.887 05:21:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:26:52.887 05:21:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:52.887 05:21:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:26:52.887 ************************************ 01:26:52.887 START TEST locking_app_on_unlocked_coremask 01:26:52.887 ************************************ 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60208 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60208 /var/tmp/spdk.sock 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60208 ']' 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:52.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:52.887 05:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:26:52.887 [2024-12-09 05:21:44.455297] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:52.887 [2024-12-09 05:21:44.455501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60208 ] 01:26:53.151 [2024-12-09 05:21:44.627615] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:26:53.151 [2024-12-09 05:21:44.627658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:53.151 [2024-12-09 05:21:44.729915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60230 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60230 /var/tmp/spdk2.sock 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60230 ']' 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:54.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:54.095 05:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:26:54.095 [2024-12-09 05:21:45.680690] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:54.095 [2024-12-09 05:21:45.680894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60230 ] 01:26:54.353 [2024-12-09 05:21:45.885702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:54.611 [2024-12-09 05:21:46.135695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:57.140 05:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:57.140 05:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:26:57.140 05:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60230 01:26:57.141 05:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60230 01:26:57.141 05:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:26:57.707 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60208 01:26:57.707 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60208 ']' 01:26:57.707 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60208 01:26:57.707 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:26:57.707 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:57.707 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60208 01:26:57.966 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:57.966 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:57.966 killing process with pid 60208 01:26:57.966 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60208' 01:26:57.966 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60208 01:26:57.966 05:21:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60208 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60230 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60230 ']' 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60230 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60230 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:02.156 killing process with pid 60230 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60230' 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60230 01:27:02.156 05:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60230 01:27:04.684 01:27:04.684 real 0m11.488s 01:27:04.684 user 0m11.957s 01:27:04.684 sys 0m1.535s 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:04.684 ************************************ 01:27:04.684 END TEST locking_app_on_unlocked_coremask 01:27:04.684 ************************************ 01:27:04.684 05:21:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:27:04.684 05:21:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:04.684 05:21:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:04.684 05:21:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:27:04.684 ************************************ 01:27:04.684 START TEST locking_app_on_locked_coremask 01:27:04.684 ************************************ 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60377 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60377 /var/tmp/spdk.sock 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60377 ']' 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:04.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:04.684 05:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:04.684 [2024-12-09 05:21:55.969517] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:27:04.684 [2024-12-09 05:21:55.969712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60377 ] 01:27:04.684 [2024-12-09 05:21:56.141106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:04.684 [2024-12-09 05:21:56.272779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60394 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60394 /var/tmp/spdk2.sock 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60394 /var/tmp/spdk2.sock 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60394 /var/tmp/spdk2.sock 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60394 ']' 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:27:05.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:05.619 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:05.619 [2024-12-09 05:21:57.200999] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:27:05.619 [2024-12-09 05:21:57.201836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60394 ] 01:27:05.878 [2024-12-09 05:21:57.396785] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60377 has claimed it. 01:27:05.878 [2024-12-09 05:21:57.396854] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:27:06.445 ERROR: process (pid: 60394) is no longer running 01:27:06.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60394) - No such process 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60377 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60377 01:27:06.445 05:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60377 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60377 ']' 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60377 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60377 01:27:06.703 killing process with pid 60377 01:27:06.703 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:06.704 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:06.704 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60377' 01:27:06.704 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60377 01:27:06.704 05:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60377 01:27:09.235 01:27:09.235 real 0m4.529s 01:27:09.236 user 0m4.946s 01:27:09.236 sys 0m0.800s 01:27:09.236 ************************************ 01:27:09.236 END TEST locking_app_on_locked_coremask 01:27:09.236 ************************************ 01:27:09.236 05:22:00 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:09.236 05:22:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:09.236 05:22:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:27:09.236 05:22:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:09.236 05:22:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:09.236 05:22:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:27:09.236 ************************************ 01:27:09.236 START TEST locking_overlapped_coremask 01:27:09.236 ************************************ 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60458 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60458 /var/tmp/spdk.sock 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60458 ']' 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:09.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:09.236 05:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:09.236 [2024-12-09 05:22:00.574185] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
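locking_app_on_locked_coremask, closed out above, exercises exactly that abort path: with pid 60377 holding the core-0 lock, the second target (60394) fails in claim_cpu_cores and exits, which the test asserts via NOT waitforlisten. Condensed:

    # A target already holds /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock    # no --disable-cpumask-locks
    # Startup fails with a non-zero exit after:
    #   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60377 has claimed it.
    #   spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.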
01:27:09.236 [2024-12-09 05:22:00.574379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60458 ] 01:27:09.236 [2024-12-09 05:22:00.759883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:27:09.495 [2024-12-09 05:22:00.886856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:09.495 [2024-12-09 05:22:00.886923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:27:09.495 [2024-12-09 05:22:00.886923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60476 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60476 /var/tmp/spdk2.sock 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60476 /var/tmp/spdk2.sock 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60476 /var/tmp/spdk2.sock 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60476 ']' 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:27:10.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:10.483 05:22:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:10.483 [2024-12-09 05:22:01.842494] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:27:10.483 [2024-12-09 05:22:01.842960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60476 ] 01:27:10.483 [2024-12-09 05:22:02.056037] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60458 has claimed it. 01:27:10.483 [2024-12-09 05:22:02.056154] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:27:11.052 ERROR: process (pid: 60476) is no longer running 01:27:11.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60476) - No such process 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60458 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60458 ']' 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60458 01:27:11.052 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 01:27:11.053 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:11.053 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60458 01:27:11.053 killing process with pid 60458 01:27:11.053 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:11.053 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:11.053 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60458' 01:27:11.053 05:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60458 01:27:11.053 05:22:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60458 01:27:13.595 01:27:13.595 real 0m4.695s 01:27:13.595 user 0m12.787s 01:27:13.595 sys 0m0.713s 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:27:13.595 ************************************ 01:27:13.595 END TEST locking_overlapped_coremask 01:27:13.595 ************************************ 01:27:13.595 05:22:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:27:13.595 05:22:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:13.595 05:22:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:13.595 05:22:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:27:13.595 ************************************ 01:27:13.595 START TEST locking_overlapped_coremask_via_rpc 01:27:13.595 ************************************ 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 01:27:13.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60546 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60546 /var/tmp/spdk.sock 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60546 ']' 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:13.595 05:22:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:13.857 [2024-12-09 05:22:05.317538] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:13.857 [2024-12-09 05:22:05.317907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60546 ] 01:27:14.114 [2024-12-09 05:22:05.503408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
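The locking_overlapped_coremask failure above is plain mask arithmetic: the first target ran with -m 0x7 (cores 0-2), the second asked for -m 0x1c (cores 2-4), and the masks intersect on core 2, which is precisely the core named in the claim_cpu_cores error. The surviving instance then accounts for exactly the three lock files that check_remaining_locks expects:

    # Any set bit in the intersection is a contested core.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))      # -> 0x4, i.e. core 2
    ls /var/tmp/spdk_cpu_lock_00{0..2}              # the three locks held by -m 0x7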
01:27:14.114 [2024-12-09 05:22:05.503715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:27:14.114 [2024-12-09 05:22:05.628377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:14.114 [2024-12-09 05:22:05.628506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:14.114 [2024-12-09 05:22:05.628530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60569 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60569 /var/tmp/spdk2.sock 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60569 ']' 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:27:15.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:15.049 05:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:15.049 [2024-12-09 05:22:06.602211] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:15.049 [2024-12-09 05:22:06.603111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60569 ] 01:27:15.307 [2024-12-09 05:22:06.806166] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:27:15.307 [2024-12-09 05:22:06.806259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:27:15.566 [2024-12-09 05:22:07.057994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:27:15.566 [2024-12-09 05:22:07.061835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:27:15.566 [2024-12-09 05:22:07.061847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:27:18.099 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:18.100 [2024-12-09 05:22:09.427919] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60546 has claimed it. 01:27:18.100 request: 01:27:18.100 { 01:27:18.100 "method": "framework_enable_cpumask_locks", 01:27:18.100 "req_id": 1 01:27:18.100 } 01:27:18.100 Got JSON-RPC error response 01:27:18.100 response: 01:27:18.100 { 01:27:18.100 "code": -32603, 01:27:18.100 "message": "Failed to claim CPU core: 2" 01:27:18.100 } 01:27:18.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
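The -32603 error above is the point of the test: both targets were started with --disable-cpumask-locks, the first (pid 60546, mask 0x7) then claimed its core locks via the framework_enable_cpumask_locks RPC, so the same RPC sent to the second target over /var/tmp/spdk2.sock must fail on the shared core 2. A minimal sketch of that sequence, assuming the build and script paths used in this run:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # ... wait for both RPC sockets to come up ...
    scripts/rpc.py framework_enable_cpumask_locks                         # ok: locks cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already claimed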
01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60546 /var/tmp/spdk.sock 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60546 ']' 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:18.100 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:18.358 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:18.358 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:27:18.358 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60569 /var/tmp/spdk2.sock 01:27:18.358 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60569 ']' 01:27:18.358 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:27:18.359 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:18.359 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:27:18.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
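With the RPC failure confirmed, check_remaining_locks (run next) asserts that only the first target's lock files survive, i.e. exactly /var/tmp/spdk_cpu_lock_000 through _002 for cores 0-2. The check is a plain glob comparison, as the xtrace below shows:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]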
01:27:18.359 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:18.359 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:27:18.616 01:27:18.616 real 0m4.804s 01:27:18.616 user 0m1.734s 01:27:18.616 sys 0m0.253s 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:18.616 ************************************ 01:27:18.616 END TEST locking_overlapped_coremask_via_rpc 01:27:18.616 ************************************ 01:27:18.616 05:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:27:18.616 05:22:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:27:18.616 05:22:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60546 ]] 01:27:18.616 05:22:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60546 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60546 ']' 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60546 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60546 01:27:18.616 killing process with pid 60546 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60546' 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60546 01:27:18.616 05:22:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60546 01:27:21.160 05:22:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60569 ]] 01:27:21.160 05:22:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60569 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60569 ']' 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60569 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:21.160 
05:22:12 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60569 01:27:21.160 killing process with pid 60569 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60569' 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60569 01:27:21.160 05:22:12 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60569 01:27:23.060 05:22:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:27:23.060 Process with pid 60546 is not found 01:27:23.060 05:22:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:27:23.060 05:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60546 ]] 01:27:23.061 05:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60546 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60546 ']' 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60546 01:27:23.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60546) - No such process 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60546 is not found' 01:27:23.061 05:22:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60569 ]] 01:27:23.061 05:22:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60569 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60569 ']' 01:27:23.061 Process with pid 60569 is not found 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60569 01:27:23.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60569) - No such process 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60569 is not found' 01:27:23.061 05:22:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:27:23.061 01:27:23.061 real 0m50.230s 01:27:23.061 user 1m27.490s 01:27:23.061 sys 0m7.676s 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:23.061 05:22:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:27:23.061 ************************************ 01:27:23.061 END TEST cpu_locks 01:27:23.061 ************************************ 01:27:23.319 ************************************ 01:27:23.319 END TEST event 01:27:23.319 ************************************ 01:27:23.319 01:27:23.319 real 1m23.208s 01:27:23.319 user 2m32.879s 01:27:23.319 sys 0m11.910s 01:27:23.319 05:22:14 event -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:23.319 05:22:14 event -- common/autotest_common.sh@10 -- # set +x 01:27:23.319 05:22:14 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:27:23.319 05:22:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:23.319 05:22:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:23.319 05:22:14 -- common/autotest_common.sh@10 -- # set +x 01:27:23.319 ************************************ 01:27:23.319 START TEST thread 01:27:23.319 ************************************ 01:27:23.319 05:22:14 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:27:23.319 * Looking for test storage... 
01:27:23.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:27:23.319 05:22:14 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:23.319 05:22:14 thread -- common/autotest_common.sh@1693 -- # lcov --version 01:27:23.319 05:22:14 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:23.319 05:22:14 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:23.319 05:22:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:23.319 05:22:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:23.319 05:22:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:23.319 05:22:14 thread -- scripts/common.sh@336 -- # IFS=.-: 01:27:23.319 05:22:14 thread -- scripts/common.sh@336 -- # read -ra ver1 01:27:23.319 05:22:14 thread -- scripts/common.sh@337 -- # IFS=.-: 01:27:23.320 05:22:14 thread -- scripts/common.sh@337 -- # read -ra ver2 01:27:23.320 05:22:14 thread -- scripts/common.sh@338 -- # local 'op=<' 01:27:23.320 05:22:14 thread -- scripts/common.sh@340 -- # ver1_l=2 01:27:23.320 05:22:14 thread -- scripts/common.sh@341 -- # ver2_l=1 01:27:23.320 05:22:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:23.320 05:22:14 thread -- scripts/common.sh@344 -- # case "$op" in 01:27:23.320 05:22:14 thread -- scripts/common.sh@345 -- # : 1 01:27:23.320 05:22:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:23.320 05:22:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:27:23.320 05:22:14 thread -- scripts/common.sh@365 -- # decimal 1 01:27:23.320 05:22:14 thread -- scripts/common.sh@353 -- # local d=1 01:27:23.320 05:22:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:23.320 05:22:14 thread -- scripts/common.sh@355 -- # echo 1 01:27:23.320 05:22:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 01:27:23.320 05:22:14 thread -- scripts/common.sh@366 -- # decimal 2 01:27:23.320 05:22:14 thread -- scripts/common.sh@353 -- # local d=2 01:27:23.320 05:22:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:23.320 05:22:14 thread -- scripts/common.sh@355 -- # echo 2 01:27:23.320 05:22:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 01:27:23.320 05:22:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:23.320 05:22:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:23.320 05:22:14 thread -- scripts/common.sh@368 -- # return 0 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:23.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:23.320 --rc genhtml_branch_coverage=1 01:27:23.320 --rc genhtml_function_coverage=1 01:27:23.320 --rc genhtml_legend=1 01:27:23.320 --rc geninfo_all_blocks=1 01:27:23.320 --rc geninfo_unexecuted_blocks=1 01:27:23.320 01:27:23.320 ' 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:23.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:23.320 --rc genhtml_branch_coverage=1 01:27:23.320 --rc genhtml_function_coverage=1 01:27:23.320 --rc genhtml_legend=1 01:27:23.320 --rc geninfo_all_blocks=1 01:27:23.320 --rc geninfo_unexecuted_blocks=1 01:27:23.320 01:27:23.320 ' 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:23.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
01:27:23.320 --rc genhtml_branch_coverage=1 01:27:23.320 --rc genhtml_function_coverage=1 01:27:23.320 --rc genhtml_legend=1 01:27:23.320 --rc geninfo_all_blocks=1 01:27:23.320 --rc geninfo_unexecuted_blocks=1 01:27:23.320 01:27:23.320 ' 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:23.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:23.320 --rc genhtml_branch_coverage=1 01:27:23.320 --rc genhtml_function_coverage=1 01:27:23.320 --rc genhtml_legend=1 01:27:23.320 --rc geninfo_all_blocks=1 01:27:23.320 --rc geninfo_unexecuted_blocks=1 01:27:23.320 01:27:23.320 ' 01:27:23.320 05:22:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:23.320 05:22:14 thread -- common/autotest_common.sh@10 -- # set +x 01:27:23.320 ************************************ 01:27:23.320 START TEST thread_poller_perf 01:27:23.320 ************************************ 01:27:23.320 05:22:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:27:23.578 [2024-12-09 05:22:14.977468] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:23.578 [2024-12-09 05:22:14.978617] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 01:27:23.578 [2024-12-09 05:22:15.157867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:23.836 [2024-12-09 05:22:15.313587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:23.836 Running 1000 pollers for 1 seconds with 1 microseconds period. 
01:27:25.216 [2024-12-09T05:22:16.833Z] ====================================== 01:27:25.216 [2024-12-09T05:22:16.833Z] busy:2212943395 (cyc) 01:27:25.216 [2024-12-09T05:22:16.833Z] total_run_count: 263000 01:27:25.216 [2024-12-09T05:22:16.833Z] tsc_hz: 2200000000 (cyc) 01:27:25.216 [2024-12-09T05:22:16.833Z] ====================================== 01:27:25.216 [2024-12-09T05:22:16.833Z] poller_cost: 8414 (cyc), 3824 (nsec) 01:27:25.216 01:27:25.216 ************************************ 01:27:25.216 END TEST thread_poller_perf 01:27:25.216 ************************************ 01:27:25.217 real 0m1.698s 01:27:25.217 user 0m1.474s 01:27:25.217 sys 0m0.112s 01:27:25.217 05:22:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:25.217 05:22:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:27:25.217 05:22:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:27:25.217 05:22:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:27:25.217 05:22:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:25.217 05:22:16 thread -- common/autotest_common.sh@10 -- # set +x 01:27:25.217 ************************************ 01:27:25.217 START TEST thread_poller_perf 01:27:25.217 ************************************ 01:27:25.217 05:22:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:27:25.217 [2024-12-09 05:22:16.731789] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:25.217 [2024-12-09 05:22:16.732133] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 01:27:25.475 [2024-12-09 05:22:16.918654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:25.475 Running 1000 pollers for 1 seconds with 0 microseconds period. 
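The poller_cost line in the table above is derived from the fields printed with it: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Reproducing the numbers with integer truncation (the formulas are inferred from the output, not from poller_perf's source):

    awk 'BEGIN { busy=2212943395; runs=263000; hz=2200000000;
                 cyc=int(busy/runs); printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc/(hz/1e9)) }'
    # -> poller_cost: 8414 (cyc), 3824 (nsec), matching the run above; the zero-period run that
    #    follows checks out the same way: int(2204019898/4396000)=501 cyc, int(501/2.2)=227 nsec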
01:27:25.475 [2024-12-09 05:22:17.038288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:26.901 [2024-12-09T05:22:18.518Z] ====================================== 01:27:26.901 [2024-12-09T05:22:18.518Z] busy:2204019898 (cyc) 01:27:26.901 [2024-12-09T05:22:18.518Z] total_run_count: 4396000 01:27:26.901 [2024-12-09T05:22:18.518Z] tsc_hz: 2200000000 (cyc) 01:27:26.901 [2024-12-09T05:22:18.518Z] ====================================== 01:27:26.901 [2024-12-09T05:22:18.518Z] poller_cost: 501 (cyc), 227 (nsec) 01:27:26.901 01:27:26.901 real 0m1.619s 01:27:26.901 user 0m1.393s 01:27:26.901 sys 0m0.118s 01:27:26.901 05:22:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:26.901 ************************************ 01:27:26.901 END TEST thread_poller_perf 01:27:26.901 ************************************ 01:27:26.901 05:22:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:27:26.901 05:22:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:27:26.901 ************************************ 01:27:26.901 END TEST thread 01:27:26.901 ************************************ 01:27:26.901 01:27:26.901 real 0m3.576s 01:27:26.901 user 0m2.984s 01:27:26.901 sys 0m0.369s 01:27:26.901 05:22:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:26.901 05:22:18 thread -- common/autotest_common.sh@10 -- # set +x 01:27:26.901 05:22:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 01:27:26.901 05:22:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:27:26.901 05:22:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:26.901 05:22:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:26.901 05:22:18 -- common/autotest_common.sh@10 -- # set +x 01:27:26.901 ************************************ 01:27:26.901 START TEST app_cmdline 01:27:26.901 ************************************ 01:27:26.901 05:22:18 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:27:26.901 * Looking for test storage... 
01:27:26.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:27:26.901 05:22:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:26.901 05:22:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 01:27:26.901 05:22:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:27.160 05:22:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@345 -- # : 1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:27.160 05:22:18 app_cmdline -- scripts/common.sh@368 -- # return 0 01:27:27.160 05:22:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:27.160 05:22:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:27.160 --rc genhtml_branch_coverage=1 01:27:27.160 --rc genhtml_function_coverage=1 01:27:27.160 --rc genhtml_legend=1 01:27:27.160 --rc geninfo_all_blocks=1 01:27:27.160 --rc geninfo_unexecuted_blocks=1 01:27:27.160 01:27:27.160 ' 01:27:27.160 05:22:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:27.160 --rc genhtml_branch_coverage=1 01:27:27.160 --rc genhtml_function_coverage=1 01:27:27.160 --rc genhtml_legend=1 01:27:27.160 --rc geninfo_all_blocks=1 01:27:27.160 --rc geninfo_unexecuted_blocks=1 01:27:27.160 
01:27:27.160 ' 01:27:27.160 05:22:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:27.160 --rc genhtml_branch_coverage=1 01:27:27.160 --rc genhtml_function_coverage=1 01:27:27.160 --rc genhtml_legend=1 01:27:27.160 --rc geninfo_all_blocks=1 01:27:27.160 --rc geninfo_unexecuted_blocks=1 01:27:27.160 01:27:27.160 ' 01:27:27.160 05:22:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:27.160 --rc genhtml_branch_coverage=1 01:27:27.160 --rc genhtml_function_coverage=1 01:27:27.160 --rc genhtml_legend=1 01:27:27.160 --rc geninfo_all_blocks=1 01:27:27.160 --rc geninfo_unexecuted_blocks=1 01:27:27.160 01:27:27.161 ' 01:27:27.161 05:22:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:27:27.161 05:22:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60890 01:27:27.161 05:22:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:27:27.161 05:22:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60890 01:27:27.161 05:22:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60890 ']' 01:27:27.161 05:22:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:27.161 05:22:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:27.161 05:22:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:27.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:27.161 05:22:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:27.161 05:22:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:27:27.161 [2024-12-09 05:22:18.717361] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
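This spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked over its socket. The test exercises both sides of the allowlist, roughly as follows (a sketch using the same rpc.py seen throughout this run):

    scripts/rpc.py spdk_get_version        # allowed: returns the version object shown below
    scripts/rpc.py rpc_get_methods         # allowed: must list exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats  # not allowlisted: rejected with -32601 'Method not found'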
01:27:27.161 [2024-12-09 05:22:18.718328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60890 ] 01:27:27.419 [2024-12-09 05:22:18.900613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:27.677 [2024-12-09 05:22:19.038878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:28.611 05:22:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:28.611 05:22:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 01:27:28.611 05:22:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:27:28.611 { 01:27:28.611 "version": "SPDK v25.01-pre git sha1 66902d69a", 01:27:28.611 "fields": { 01:27:28.611 "major": 25, 01:27:28.611 "minor": 1, 01:27:28.611 "patch": 0, 01:27:28.611 "suffix": "-pre", 01:27:28.611 "commit": "66902d69a" 01:27:28.611 } 01:27:28.611 } 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:27:28.611 05:22:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.611 05:22:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:27:28.611 05:22:20 app_cmdline -- app/cmdline.sh@26 -- # sort 01:27:28.611 05:22:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.612 05:22:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:27:28.612 05:22:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:27:28.612 05:22:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:27:28.612 05:22:20 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:27:28.871 request: 01:27:28.871 { 01:27:28.871 "method": "env_dpdk_get_mem_stats", 01:27:28.871 "req_id": 1 01:27:28.871 } 01:27:28.871 Got JSON-RPC error response 01:27:28.871 response: 01:27:28.871 { 01:27:28.871 "code": -32601, 01:27:28.871 "message": "Method not found" 01:27:28.871 } 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@655 -- # es=1 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:28.871 05:22:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60890 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60890 ']' 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60890 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@959 -- # uname 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:28.871 05:22:20 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60890 01:27:29.130 killing process with pid 60890 01:27:29.130 05:22:20 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:29.130 05:22:20 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:29.130 05:22:20 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60890' 01:27:29.130 05:22:20 app_cmdline -- common/autotest_common.sh@973 -- # kill 60890 01:27:29.130 05:22:20 app_cmdline -- common/autotest_common.sh@978 -- # wait 60890 01:27:31.033 01:27:31.033 real 0m4.245s 01:27:31.033 user 0m4.622s 01:27:31.033 sys 0m0.717s 01:27:31.033 05:22:22 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:31.033 ************************************ 01:27:31.033 END TEST app_cmdline 01:27:31.033 ************************************ 01:27:31.033 05:22:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:27:31.292 05:22:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:27:31.292 05:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:31.292 05:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:31.292 05:22:22 -- common/autotest_common.sh@10 -- # set +x 01:27:31.292 ************************************ 01:27:31.292 START TEST version 01:27:31.292 ************************************ 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:27:31.292 * Looking for test storage... 
01:27:31.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1693 -- # lcov --version 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:31.292 05:22:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:31.292 05:22:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:31.292 05:22:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:31.292 05:22:22 version -- scripts/common.sh@336 -- # IFS=.-: 01:27:31.292 05:22:22 version -- scripts/common.sh@336 -- # read -ra ver1 01:27:31.292 05:22:22 version -- scripts/common.sh@337 -- # IFS=.-: 01:27:31.292 05:22:22 version -- scripts/common.sh@337 -- # read -ra ver2 01:27:31.292 05:22:22 version -- scripts/common.sh@338 -- # local 'op=<' 01:27:31.292 05:22:22 version -- scripts/common.sh@340 -- # ver1_l=2 01:27:31.292 05:22:22 version -- scripts/common.sh@341 -- # ver2_l=1 01:27:31.292 05:22:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:31.292 05:22:22 version -- scripts/common.sh@344 -- # case "$op" in 01:27:31.292 05:22:22 version -- scripts/common.sh@345 -- # : 1 01:27:31.292 05:22:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:31.292 05:22:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:27:31.292 05:22:22 version -- scripts/common.sh@365 -- # decimal 1 01:27:31.292 05:22:22 version -- scripts/common.sh@353 -- # local d=1 01:27:31.292 05:22:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:31.292 05:22:22 version -- scripts/common.sh@355 -- # echo 1 01:27:31.292 05:22:22 version -- scripts/common.sh@365 -- # ver1[v]=1 01:27:31.292 05:22:22 version -- scripts/common.sh@366 -- # decimal 2 01:27:31.292 05:22:22 version -- scripts/common.sh@353 -- # local d=2 01:27:31.292 05:22:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:31.292 05:22:22 version -- scripts/common.sh@355 -- # echo 2 01:27:31.292 05:22:22 version -- scripts/common.sh@366 -- # ver2[v]=2 01:27:31.292 05:22:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:31.292 05:22:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:31.292 05:22:22 version -- scripts/common.sh@368 -- # return 0 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:31.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.292 --rc genhtml_branch_coverage=1 01:27:31.292 --rc genhtml_function_coverage=1 01:27:31.292 --rc genhtml_legend=1 01:27:31.292 --rc geninfo_all_blocks=1 01:27:31.292 --rc geninfo_unexecuted_blocks=1 01:27:31.292 01:27:31.292 ' 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:31.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.292 --rc genhtml_branch_coverage=1 01:27:31.292 --rc genhtml_function_coverage=1 01:27:31.292 --rc genhtml_legend=1 01:27:31.292 --rc geninfo_all_blocks=1 01:27:31.292 --rc geninfo_unexecuted_blocks=1 01:27:31.292 01:27:31.292 ' 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:31.292 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:27:31.292 --rc genhtml_branch_coverage=1 01:27:31.292 --rc genhtml_function_coverage=1 01:27:31.292 --rc genhtml_legend=1 01:27:31.292 --rc geninfo_all_blocks=1 01:27:31.292 --rc geninfo_unexecuted_blocks=1 01:27:31.292 01:27:31.292 ' 01:27:31.292 05:22:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:31.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.292 --rc genhtml_branch_coverage=1 01:27:31.292 --rc genhtml_function_coverage=1 01:27:31.292 --rc genhtml_legend=1 01:27:31.292 --rc geninfo_all_blocks=1 01:27:31.292 --rc geninfo_unexecuted_blocks=1 01:27:31.292 01:27:31.292 ' 01:27:31.292 05:22:22 version -- app/version.sh@17 -- # get_header_version major 01:27:31.292 05:22:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:27:31.292 05:22:22 version -- app/version.sh@14 -- # cut -f2 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # tr -d '"' 01:27:31.293 05:22:22 version -- app/version.sh@17 -- # major=25 01:27:31.293 05:22:22 version -- app/version.sh@18 -- # get_header_version minor 01:27:31.293 05:22:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # cut -f2 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # tr -d '"' 01:27:31.293 05:22:22 version -- app/version.sh@18 -- # minor=1 01:27:31.293 05:22:22 version -- app/version.sh@19 -- # get_header_version patch 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # cut -f2 01:27:31.293 05:22:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # tr -d '"' 01:27:31.293 05:22:22 version -- app/version.sh@19 -- # patch=0 01:27:31.293 05:22:22 version -- app/version.sh@20 -- # get_header_version suffix 01:27:31.293 05:22:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # cut -f2 01:27:31.293 05:22:22 version -- app/version.sh@14 -- # tr -d '"' 01:27:31.551 05:22:22 version -- app/version.sh@20 -- # suffix=-pre 01:27:31.551 05:22:22 version -- app/version.sh@22 -- # version=25.1 01:27:31.551 05:22:22 version -- app/version.sh@25 -- # (( patch != 0 )) 01:27:31.551 05:22:22 version -- app/version.sh@28 -- # version=25.1rc0 01:27:31.551 05:22:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:27:31.551 05:22:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:27:31.551 05:22:22 version -- app/version.sh@30 -- # py_version=25.1rc0 01:27:31.551 05:22:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 01:27:31.551 01:27:31.551 real 0m0.256s 01:27:31.551 user 0m0.157s 01:27:31.551 sys 0m0.134s 01:27:31.551 ************************************ 01:27:31.551 END TEST version 01:27:31.551 ************************************ 01:27:31.551 05:22:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:31.551 05:22:22 version -- common/autotest_common.sh@10 -- # set +x 01:27:31.551 05:22:22 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 01:27:31.551 05:22:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 01:27:31.551 05:22:22 -- spdk/autotest.sh@194 -- # uname -s 01:27:31.551 05:22:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 01:27:31.551 05:22:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:27:31.551 05:22:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:27:31.551 05:22:22 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 01:27:31.551 05:22:22 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 01:27:31.551 05:22:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:31.551 05:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:31.551 05:22:22 -- common/autotest_common.sh@10 -- # set +x 01:27:31.551 ************************************ 01:27:31.551 START TEST blockdev_nvme 01:27:31.551 ************************************ 01:27:31.551 05:22:23 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 01:27:31.551 * Looking for test storage... 01:27:31.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:27:31.551 05:22:23 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:31.551 05:22:23 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 01:27:31.551 05:22:23 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:31.812 05:22:23 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@345 -- # : 1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:31.812 05:22:23 blockdev_nvme -- scripts/common.sh@368 -- # return 0 01:27:31.812 05:22:23 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:31.812 05:22:23 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.812 --rc genhtml_branch_coverage=1 01:27:31.812 --rc genhtml_function_coverage=1 01:27:31.812 --rc genhtml_legend=1 01:27:31.812 --rc geninfo_all_blocks=1 01:27:31.812 --rc geninfo_unexecuted_blocks=1 01:27:31.812 01:27:31.812 ' 01:27:31.812 05:22:23 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:31.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.813 --rc genhtml_branch_coverage=1 01:27:31.813 --rc genhtml_function_coverage=1 01:27:31.813 --rc genhtml_legend=1 01:27:31.813 --rc geninfo_all_blocks=1 01:27:31.813 --rc geninfo_unexecuted_blocks=1 01:27:31.813 01:27:31.813 ' 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:31.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.813 --rc genhtml_branch_coverage=1 01:27:31.813 --rc genhtml_function_coverage=1 01:27:31.813 --rc genhtml_legend=1 01:27:31.813 --rc geninfo_all_blocks=1 01:27:31.813 --rc geninfo_unexecuted_blocks=1 01:27:31.813 01:27:31.813 ' 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:31.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:31.813 --rc genhtml_branch_coverage=1 01:27:31.813 --rc genhtml_function_coverage=1 01:27:31.813 --rc genhtml_legend=1 01:27:31.813 --rc geninfo_all_blocks=1 01:27:31.813 --rc geninfo_unexecuted_blocks=1 01:27:31.813 01:27:31.813 ' 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:27:31.813 05:22:23 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61073 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:27:31.813 05:22:23 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61073 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61073 ']' 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:31.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:31.813 05:22:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:31.813 [2024-12-09 05:22:23.338385] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
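setup_nvme_conf (below) feeds this target a bdev subsystem config generated by gen_nvme.sh, with one bdev_nvme_attach_controller entry per emulated PCIe controller at 0000:00:10.0 through 0000:00:13.0. Trimmed to its first entry, the loaded JSON has this shape:

    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] }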
01:27:31.813 [2024-12-09 05:22:23.338779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61073 ] 01:27:32.072 [2024-12-09 05:22:23.523525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:32.072 [2024-12-09 05:22:23.638810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:33.006 05:22:24 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:33.006 05:22:24 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 01:27:33.006 05:22:24 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:27:33.006 05:22:24 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 01:27:33.006 05:22:24 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 01:27:33.006 05:22:24 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 01:27:33.006 05:22:24 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:27:33.006 05:22:24 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 01:27:33.006 05:22:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.006 05:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.265 05:22:24 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 01:27:33.265 05:22:24 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.265 05:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:33.523 05:22:24 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.523 05:22:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:27:33.523 05:22:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 01:27:33.524 05:22:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e57b50a6-8c1b-4f1d-bde0-395e71e83d46"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e57b50a6-8c1b-4f1d-bde0-395e71e83d46",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "13f57d2f-84f8-4522-b0d3-76746b5f32ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "13f57d2f-84f8-4522-b0d3-76746b5f32ca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f3b1e388-854b-448d-b464-7376c54d6c29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3b1e388-854b-448d-b464-7376c54d6c29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "67ecd65c-3978-4cbb-94b1-491d864a0025"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67ecd65c-3978-4cbb-94b1-491d864a0025",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3365afbc-bf7a-45ef-8a39-32c400cca440"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "3365afbc-bf7a-45ef-8a39-32c400cca440",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "80718170-c70a-4984-8162-f7086c4bc711"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80718170-c70a-4984-8162-f7086c4bc711",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 01:27:33.524 05:22:25 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:27:33.524 05:22:25 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 01:27:33.524 05:22:25 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:27:33.524 05:22:25 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61073 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61073 ']' 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61073 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 01:27:33.524 05:22:25 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61073 01:27:33.524 killing process with pid 61073 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61073' 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61073 01:27:33.524 05:22:25 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61073 01:27:36.070 05:22:27 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:27:36.070 05:22:27 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:27:36.070 05:22:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:27:36.070 05:22:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:36.070 05:22:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:36.070 ************************************ 01:27:36.070 START TEST bdev_hello_world 01:27:36.070 ************************************ 01:27:36.070 05:22:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:27:36.070 [2024-12-09 05:22:27.275072] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:36.070 [2024-12-09 05:22:27.275268] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61168 ] 01:27:36.070 [2024-12-09 05:22:27.459833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:36.070 [2024-12-09 05:22:27.592356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:37.003 [2024-12-09 05:22:28.252149] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:27:37.003 [2024-12-09 05:22:28.252216] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 01:27:37.003 [2024-12-09 05:22:28.252278] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:27:37.003 [2024-12-09 05:22:28.255871] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:27:37.003 [2024-12-09 05:22:28.256516] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:27:37.003 [2024-12-09 05:22:28.256577] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:27:37.003 [2024-12-09 05:22:28.256778] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
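Note: the bdev_hello_world stage above drives the packaged hello_bdev example against Nvme0n1 using the same bdev.json configuration the harness built earlier (scripts/gen_nvme.sh emitting bdev_nvme_attach_controller entries, loaded via the load_subsystem_config RPC). Reproducing it by hand reduces to roughly this sketch, assuming the checkout paths used in this run (the /tmp output path is hypothetical):

  cd /home/vagrant/spdk_repo/spdk
  # regenerate the NVMe attach config from the locally visible PCIe controllers
  sudo scripts/gen_nvme.sh > /tmp/nvme_bdev.json
  # run the packaged example against the first controller
  sudo build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1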
01:27:37.003 01:27:37.003 [2024-12-09 05:22:28.256819] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:27:37.937 01:27:37.937 real 0m2.199s 01:27:37.937 user 0m1.804s 01:27:37.937 sys 0m0.283s 01:27:37.937 ************************************ 01:27:37.937 END TEST bdev_hello_world 01:27:37.937 ************************************ 01:27:37.937 05:22:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:37.937 05:22:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:27:37.937 05:22:29 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:27:37.937 05:22:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:37.937 05:22:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:37.937 05:22:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:37.937 ************************************ 01:27:37.937 START TEST bdev_bounds 01:27:37.937 ************************************ 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:27:37.937 Process bdevio pid: 61210 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61210 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61210' 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61210 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61210 ']' 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:37.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:37.937 05:22:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:27:37.937 [2024-12-09 05:22:29.535548] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
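Note: bdevio is launched here in wait mode (-w) with no preallocated memory (-s 0) and a three-core mask (-c 0x7, visible in the EAL parameters below); the suites only begin once tests.py issues perform_tests over the default /var/tmp/spdk.sock RPC socket. The same two steps by hand, as a minimal sketch:

  sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # once the process is listening on /var/tmp/spdk.sock, kick off the suites
  sudo test/bdev/bdevio/tests.py perform_tests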
01:27:37.937 [2024-12-09 05:22:29.536109] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61210 ] 01:27:38.195 [2024-12-09 05:22:29.724073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:27:38.454 [2024-12-09 05:22:29.853779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:38.454 [2024-12-09 05:22:29.853896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:38.454 [2024-12-09 05:22:29.853917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:27:39.020 05:22:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:39.020 05:22:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:27:39.020 05:22:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:27:39.279 I/O targets: 01:27:39.279 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 01:27:39.279 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 01:27:39.279 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 01:27:39.279 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 01:27:39.279 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 01:27:39.279 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 01:27:39.279 01:27:39.279 01:27:39.279 CUnit - A unit testing framework for C - Version 2.1-3 01:27:39.279 http://cunit.sourceforge.net/ 01:27:39.279 01:27:39.279 01:27:39.279 Suite: bdevio tests on: Nvme3n1 01:27:39.279 Test: blockdev write read block ...passed 01:27:39.279 Test: blockdev write zeroes read block ...passed 01:27:39.279 Test: blockdev write zeroes read no split ...passed 01:27:39.279 Test: blockdev write zeroes read split ...passed 01:27:39.279 Test: blockdev write zeroes read split partial ...passed 01:27:39.279 Test: blockdev reset ...[2024-12-09 05:22:30.735709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 01:27:39.279 passed 01:27:39.279 Test: blockdev write read 8 blocks ...[2024-12-09 05:22:30.739807] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
01:27:39.279 passed 01:27:39.279 Test: blockdev write read size > 128k ...passed 01:27:39.279 Test: blockdev write read invalid size ...passed 01:27:39.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:27:39.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:27:39.279 Test: blockdev write read max offset ...passed 01:27:39.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:27:39.279 Test: blockdev writev readv 8 blocks ...passed 01:27:39.279 Test: blockdev writev readv 30 x 1block ...passed 01:27:39.279 Test: blockdev writev readv block ...passed 01:27:39.279 Test: blockdev writev readv size > 128k ...passed 01:27:39.279 Test: blockdev writev readv size > 128k in two iovs ...passed 01:27:39.279 Test: blockdev comparev and writev ...[2024-12-09 05:22:30.748604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 01:27:39.279 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2be60a000 len:0x1000 01:27:39.279 [2024-12-09 05:22:30.748866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:27:39.279 passed 01:27:39.279 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:22:30.749684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:27:39.279 passed 01:27:39.279 Test: blockdev nvme admin passthru ...[2024-12-09 05:22:30.749728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:27:39.279 passed 01:27:39.279 Test: blockdev copy ...passed 01:27:39.279 Suite: bdevio tests on: Nvme2n3 01:27:39.279 Test: blockdev write read block ...passed 01:27:39.279 Test: blockdev write zeroes read block ...passed 01:27:39.279 Test: blockdev write zeroes read no split ...passed 01:27:39.279 Test: blockdev write zeroes read split ...passed 01:27:39.279 Test: blockdev write zeroes read split partial ...passed 01:27:39.279 Test: blockdev reset ...[2024-12-09 05:22:30.814750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:27:39.279 [2024-12-09 05:22:30.820739] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:27:39.279 passed 01:27:39.279 Test: blockdev write read 8 blocks ...passed 01:27:39.279 Test: blockdev write read size > 128k ...passed 01:27:39.279 Test: blockdev write read invalid size ...passed 01:27:39.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:27:39.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:27:39.279 Test: blockdev write read max offset ...passed 01:27:39.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:27:39.279 Test: blockdev writev readv 8 blocks ...passed 01:27:39.279 Test: blockdev writev readv 30 x 1block ...passed 01:27:39.279 Test: blockdev writev readv block ...passed 01:27:39.279 Test: blockdev writev readv size > 128k ...passed 01:27:39.279 Test: blockdev writev readv size > 128k in two iovs ...passed 01:27:39.279 Test: blockdev comparev and writev ...[2024-12-09 05:22:30.832119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a1006000 len:0x1000 01:27:39.279 [2024-12-09 05:22:30.832240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:27:39.279 passed 01:27:39.279 Test: blockdev nvme passthru rw ...passed 01:27:39.279 Test: blockdev nvme passthru vendor specific ...passed 01:27:39.279 Test: blockdev nvme admin passthru ...[2024-12-09 05:22:30.833488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:27:39.279 [2024-12-09 05:22:30.833575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:27:39.279 passed 01:27:39.279 Test: blockdev copy ...passed 01:27:39.279 Suite: bdevio tests on: Nvme2n2 01:27:39.279 Test: blockdev write read block ...passed 01:27:39.279 Test: blockdev write zeroes read block ...passed 01:27:39.279 Test: blockdev write zeroes read no split ...passed 01:27:39.279 Test: blockdev write zeroes read split ...passed 01:27:39.537 Test: blockdev write zeroes read split partial ...passed 01:27:39.537 Test: blockdev reset ...[2024-12-09 05:22:30.900473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:27:39.537 passed 01:27:39.537 Test: blockdev write read 8 blocks ...[2024-12-09 05:22:30.905102] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:27:39.537 passed 01:27:39.537 Test: blockdev write read size > 128k ...passed 01:27:39.537 Test: blockdev write read invalid size ...passed 01:27:39.537 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:27:39.537 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:27:39.537 Test: blockdev write read max offset ...passed 01:27:39.537 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:27:39.537 Test: blockdev writev readv 8 blocks ...passed 01:27:39.537 Test: blockdev writev readv 30 x 1block ...passed 01:27:39.537 Test: blockdev writev readv block ...passed 01:27:39.537 Test: blockdev writev readv size > 128k ...passed 01:27:39.538 Test: blockdev writev readv size > 128k in two iovs ...passed 01:27:39.538 Test: blockdev comparev and writev ...[2024-12-09 05:22:30.913242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce63c000 len:0x1000 01:27:39.538 [2024-12-09 05:22:30.913303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:27:39.538 passed 01:27:39.538 Test: blockdev nvme passthru rw ...passed 01:27:39.538 Test: blockdev nvme passthru vendor specific ...passed 01:27:39.538 Test: blockdev nvme admin passthru ...[2024-12-09 05:22:30.914138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:27:39.538 [2024-12-09 05:22:30.914187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:27:39.538 passed 01:27:39.538 Test: blockdev copy ...passed 01:27:39.538 Suite: bdevio tests on: Nvme2n1 01:27:39.538 Test: blockdev write read block ...passed 01:27:39.538 Test: blockdev write zeroes read block ...passed 01:27:39.538 Test: blockdev write zeroes read no split ...passed 01:27:39.538 Test: blockdev write zeroes read split ...passed 01:27:39.538 Test: blockdev write zeroes read split partial ...passed 01:27:39.538 Test: blockdev reset ...[2024-12-09 05:22:30.978636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:27:39.538 [2024-12-09 05:22:30.983391] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:27:39.538 passed 01:27:39.538 Test: blockdev write read 8 blocks ...passed 01:27:39.538 Test: blockdev write read size > 128k ...passed 01:27:39.538 Test: blockdev write read invalid size ...passed 01:27:39.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:27:39.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:27:39.538 Test: blockdev write read max offset ...passed 01:27:39.538 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:27:39.538 Test: blockdev writev readv 8 blocks ...passed 01:27:39.538 Test: blockdev writev readv 30 x 1block ...passed 01:27:39.538 Test: blockdev writev readv block ...passed 01:27:39.538 Test: blockdev writev readv size > 128k ...passed 01:27:39.538 Test: blockdev writev readv size > 128k in two iovs ...passed 01:27:39.538 Test: blockdev comparev and writev ...[2024-12-09 05:22:30.992918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce638000 len:0x1000 01:27:39.538 [2024-12-09 05:22:30.993174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:27:39.538 passed 01:27:39.538 Test: blockdev nvme passthru rw ...passed 01:27:39.538 Test: blockdev nvme passthru vendor specific ...passed 01:27:39.538 Test: blockdev nvme admin passthru ...[2024-12-09 05:22:30.993999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:27:39.538 [2024-12-09 05:22:30.994049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:27:39.538 passed 01:27:39.538 Test: blockdev copy ...passed 01:27:39.538 Suite: bdevio tests on: Nvme1n1 01:27:39.538 Test: blockdev write read block ...passed 01:27:39.538 Test: blockdev write zeroes read block ...passed 01:27:39.538 Test: blockdev write zeroes read no split ...passed 01:27:39.538 Test: blockdev write zeroes read split ...passed 01:27:39.538 Test: blockdev write zeroes read split partial ...passed 01:27:39.538 Test: blockdev reset ...[2024-12-09 05:22:31.056735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 01:27:39.538 [2024-12-09 05:22:31.060612] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 01:27:39.538 passed 01:27:39.538 Test: blockdev write read 8 blocks ...passed 01:27:39.538 Test: blockdev write read size > 128k ...
01:27:39.538 passed 01:27:39.538 Test: blockdev write read invalid size ...passed 01:27:39.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:27:39.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:27:39.538 Test: blockdev write read max offset ...passed 01:27:39.538 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:27:39.538 Test: blockdev writev readv 8 blocks ...passed 01:27:39.538 Test: blockdev writev readv 30 x 1block ...passed 01:27:39.538 Test: blockdev writev readv block ...passed 01:27:39.538 Test: blockdev writev readv size > 128k ...passed 01:27:39.538 Test: blockdev writev readv size > 128k in two iovs ...passed 01:27:39.538 Test: blockdev comparev and writev ...[2024-12-09 05:22:31.069086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce634000 len:0x1000 01:27:39.538 [2024-12-09 05:22:31.069148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:27:39.538 passed 01:27:39.538 Test: blockdev nvme passthru rw ...passed 01:27:39.538 Test: blockdev nvme passthru vendor specific ...passed 01:27:39.538 Test: blockdev nvme admin passthru ...[2024-12-09 05:22:31.070004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:27:39.538 [2024-12-09 05:22:31.070053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:27:39.538 passed 01:27:39.538 Test: blockdev copy ...passed 01:27:39.538 Suite: bdevio tests on: Nvme0n1 01:27:39.538 Test: blockdev write read block ...passed 01:27:39.538 Test: blockdev write zeroes read block ...passed 01:27:39.538 Test: blockdev write zeroes read no split ...passed 01:27:39.538 Test: blockdev write zeroes read split ...passed 01:27:39.538 Test: blockdev write zeroes read split partial ...passed 01:27:39.538 Test: blockdev reset ...[2024-12-09 05:22:31.134280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 01:27:39.538 [2024-12-09 05:22:31.138464] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 01:27:39.538 passed 01:27:39.538 Test: blockdev write read 8 blocks ...passed 01:27:39.538 Test: blockdev write read size > 128k ...passed 01:27:39.538 Test: blockdev write read invalid size ...passed 01:27:39.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:27:39.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:27:39.538 Test: blockdev write read max offset ...passed 01:27:39.538 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:27:39.538 Test: blockdev writev readv 8 blocks ...passed 01:27:39.538 Test: blockdev writev readv 30 x 1block ...passed 01:27:39.538 Test: blockdev writev readv block ...passed 01:27:39.538 Test: blockdev writev readv size > 128k ...passed 01:27:39.538 Test: blockdev writev readv size > 128k in two iovs ...passed 01:27:39.538 Test: blockdev comparev and writev ...passed 01:27:39.538 Test: blockdev nvme passthru rw ...[2024-12-09 05:22:31.146622] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 01:27:39.538 separate metadata which is not supported yet. 
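Note: the skip just above is consistent with the bdev dump earlier in this log: Nvme0n1 was created with 64 bytes of separate, non-interleaved metadata ("md_size": 64, "md_interleave": false), which the comparev_and_writev path in bdevio does not handle yet. One way to confirm the layout by hand, sketched with the stock RPC client (the jq filter is illustrative):

  sudo scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'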
01:27:39.538 passed 01:27:39.538 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:22:31.147207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 01:27:39.538 passed 01:27:39.538 Test: blockdev nvme admin passthru ...[2024-12-09 05:22:31.147380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 01:27:39.864 passed 01:27:39.864 Test: blockdev copy ...passed 01:27:39.864 01:27:39.864 Run Summary: Type Total Ran Passed Failed Inactive 01:27:39.864 suites 6 6 n/a 0 0 01:27:39.864 tests 138 138 138 0 0 01:27:39.864 asserts 893 893 893 0 n/a 01:27:39.864 01:27:39.864 Elapsed time = 1.265 seconds 01:27:39.864 0 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61210 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61210 ']' 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61210 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61210 01:27:39.864 killing process with pid 61210 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61210' 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61210 01:27:39.864 05:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61210 01:27:40.806 05:22:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:27:40.806 01:27:40.806 real 0m2.854s 01:27:40.806 user 0m7.136s 01:27:40.806 sys 0m0.467s 01:27:40.806 05:22:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:40.806 05:22:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:27:40.806 ************************************ 01:27:40.806 END TEST bdev_bounds 01:27:40.806 ************************************ 01:27:40.806 05:22:32 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:27:40.806 05:22:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:27:40.806 05:22:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:40.806 05:22:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:27:40.806 ************************************ 01:27:40.806 START TEST bdev_nbd 01:27:40.806 ************************************ 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local
rpc_server=/var/tmp/spdk-nbd.sock 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61275 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61275 /var/tmp/spdk-nbd.sock 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61275 ']' 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:27:40.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:40.806 05:22:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:27:41.064 [2024-12-09 05:22:32.463849] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
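Note: nbd_function_test exercises the same bdevs through the kernel block layer: bdev_svc is started on its own RPC socket (/var/tmp/spdk-nbd.sock), each bdev is exported as a /dev/nbdX device with nbd_start_disk, and each device is sanity-checked with a single 4 KiB direct-I/O read, as the dd 1+0 records transcripts below show. The per-device loop reduces to roughly this sketch, using the same socket and paths as this run:

  sock=/var/tmp/spdk-nbd.sock
  sudo scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  # one direct-I/O block read to prove the export works
  sudo dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
  sudo scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0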
01:27:41.065 [2024-12-09 05:22:32.464441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:41.065 [2024-12-09 05:22:32.653102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:41.323 [2024-12-09 05:22:32.788028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:41.891 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:41.891 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:27:41.891 05:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:41.892 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:42.459 1+0 records in 
01:27:42.459 1+0 records out 01:27:42.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554354 s, 7.4 MB/s 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:42.459 05:22:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:42.718 1+0 records in 01:27:42.718 1+0 records out 01:27:42.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745701 s, 5.5 MB/s 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:42.718 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:42.977 1+0 records in 01:27:42.977 1+0 records out 01:27:42.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072237 s, 5.7 MB/s 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:42.977 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:43.236 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:43.237 1+0 records in 01:27:43.237 1+0 records out 01:27:43.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717343 s, 5.7 MB/s 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:43.237 05:22:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:43.237 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:43.496 1+0 records in 01:27:43.496 1+0 records out 01:27:43.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843368 s, 4.9 MB/s 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:43.496 05:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:43.754 1+0 records in 01:27:43.754 1+0 records out 01:27:43.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557733 s, 7.3 MB/s 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:27:43.754 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:27:44.013 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd0", 01:27:44.013 "bdev_name": "Nvme0n1" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd1", 01:27:44.013 "bdev_name": "Nvme1n1" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd2", 01:27:44.013 "bdev_name": "Nvme2n1" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd3", 01:27:44.013 "bdev_name": "Nvme2n2" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd4", 01:27:44.013 "bdev_name": "Nvme2n3" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd5", 01:27:44.013 "bdev_name": "Nvme3n1" 01:27:44.013 } 01:27:44.013 ]' 01:27:44.013 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:27:44.013 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd0", 01:27:44.013 "bdev_name": "Nvme0n1" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd1", 01:27:44.013 "bdev_name": "Nvme1n1" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd2", 01:27:44.013 "bdev_name": "Nvme2n1" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd3", 01:27:44.013 "bdev_name": "Nvme2n2" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd4", 01:27:44.013 "bdev_name": "Nvme2n3" 01:27:44.013 }, 01:27:44.013 { 01:27:44.013 "nbd_device": "/dev/nbd5", 01:27:44.013 "bdev_name": "Nvme3n1" 01:27:44.013 } 01:27:44.013 ]' 01:27:44.013 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:44.271 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:44.529 05:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:44.787 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:45.044 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:45.302 05:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:45.560 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:45.819 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:27:46.078 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:27:46.078 05:22:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:27:46.078 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:27:46.078 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:27:46.336 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:46.337 05:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 01:27:46.595 /dev/nbd0 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:46.595 
05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:46.595 1+0 records in 01:27:46.595 1+0 records out 01:27:46.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637932 s, 6.4 MB/s 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:46.595 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:46.596 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 01:27:46.854 /dev/nbd1 01:27:46.854 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:27:46.854 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:27:46.854 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:27:46.854 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:46.854 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:46.855 1+0 records in 01:27:46.855 1+0 records out 01:27:46.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579715 s, 7.1 MB/s 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:46.855 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 01:27:47.113 /dev/nbd10 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:47.113 1+0 records in 01:27:47.113 1+0 records out 01:27:47.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829641 s, 4.9 MB/s 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:47.113 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 01:27:47.371 /dev/nbd11 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:47.371 05:22:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:47.371 1+0 records in 01:27:47.371 1+0 records out 01:27:47.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873335 s, 4.7 MB/s 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:47.371 05:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 01:27:47.938 /dev/nbd12 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:47.938 1+0 records in 01:27:47.938 1+0 records out 01:27:47.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722013 s, 5.7 MB/s 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:47.938 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 01:27:47.938 /dev/nbd13 
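The loop traced above repeats the same start-and-wait handshake for each bdev/device pair, and continues below for /dev/nbd13: nbd_start_disk exports the bdev, then the waitfornbd helper polls /proc/partitions until the kernel has registered the node and proves it usable with a single 4 KiB direct read. A minimal standalone sketch of that pattern (the real helper lives in common/autotest_common.sh; the 0.1 s sleep and the /tmp scratch path are assumptions, while the 20-try bound and the dd flags match the trace):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Ready once the device shows up as a whole word in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed interval; the trace does not show the delay
        done
        ((i <= 20)) || return 1
        # Prove the node is actually readable: one 4 KiB O_DIRECT read.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        # A zero-byte result would mean the read silently failed (the trace
        # checks this with stat -c %s against 0).
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
        rm -f /tmp/nbdtest
    }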
01:27:48.196 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:48.197 1+0 records in 01:27:48.197 1+0 records out 01:27:48.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000938834 s, 4.4 MB/s 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:48.197 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:27:48.455 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd0", 01:27:48.455 "bdev_name": "Nvme0n1" 01:27:48.455 }, 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd1", 01:27:48.455 "bdev_name": "Nvme1n1" 01:27:48.455 }, 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd10", 01:27:48.455 "bdev_name": "Nvme2n1" 01:27:48.455 }, 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd11", 01:27:48.455 "bdev_name": "Nvme2n2" 01:27:48.455 }, 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd12", 01:27:48.455 "bdev_name": "Nvme2n3" 01:27:48.455 }, 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd13", 01:27:48.455 "bdev_name": "Nvme3n1" 01:27:48.455 } 01:27:48.455 ]' 01:27:48.455 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd0", 01:27:48.455 "bdev_name": "Nvme0n1" 01:27:48.455 }, 01:27:48.455 { 01:27:48.455 "nbd_device": "/dev/nbd1", 01:27:48.456 "bdev_name": "Nvme1n1" 01:27:48.456 }, 01:27:48.456 { 01:27:48.456 "nbd_device": "/dev/nbd10", 01:27:48.456 "bdev_name": "Nvme2n1" 
01:27:48.456 }, 01:27:48.456 { 01:27:48.456 "nbd_device": "/dev/nbd11", 01:27:48.456 "bdev_name": "Nvme2n2" 01:27:48.456 }, 01:27:48.456 { 01:27:48.456 "nbd_device": "/dev/nbd12", 01:27:48.456 "bdev_name": "Nvme2n3" 01:27:48.456 }, 01:27:48.456 { 01:27:48.456 "nbd_device": "/dev/nbd13", 01:27:48.456 "bdev_name": "Nvme3n1" 01:27:48.456 } 01:27:48.456 ]' 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:27:48.456 /dev/nbd1 01:27:48.456 /dev/nbd10 01:27:48.456 /dev/nbd11 01:27:48.456 /dev/nbd12 01:27:48.456 /dev/nbd13' 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:27:48.456 /dev/nbd1 01:27:48.456 /dev/nbd10 01:27:48.456 /dev/nbd11 01:27:48.456 /dev/nbd12 01:27:48.456 /dev/nbd13' 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:27:48.456 256+0 records in 01:27:48.456 256+0 records out 01:27:48.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010977 s, 95.5 MB/s 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:27:48.456 05:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:27:48.714 256+0 records in 01:27:48.714 256+0 records out 01:27:48.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156734 s, 6.7 MB/s 01:27:48.714 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:27:48.714 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:27:48.714 256+0 records in 01:27:48.714 256+0 records out 01:27:48.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167779 s, 6.2 MB/s 01:27:48.714 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:27:48.714 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 01:27:48.972 256+0 records in 01:27:48.972 256+0 records out 01:27:48.972 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155761 s, 6.7 MB/s 01:27:48.972 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:27:48.972 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 01:27:49.230 256+0 records in 01:27:49.230 256+0 records out 01:27:49.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172971 s, 6.1 MB/s 01:27:49.230 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:27:49.230 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 01:27:49.230 256+0 records in 01:27:49.230 256+0 records out 01:27:49.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167106 s, 6.3 MB/s 01:27:49.230 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:27:49.230 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 01:27:49.489 256+0 records in 01:27:49.489 256+0 records out 01:27:49.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175006 s, 6.0 MB/s 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:27:49.489 05:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:49.489 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:50.057 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:50.316 05:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:50.575 05:22:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:50.575 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 01:27:50.832 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 01:27:50.832 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 01:27:50.832 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 01:27:50.832 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:50.832 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:50.832 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 01:27:50.833 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:50.833 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:50.833 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:50.833 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:51.091 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 01:27:51.657 05:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:27:51.915 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:27:52.174 malloc_lvol_verify 01:27:52.174 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:27:52.433 6ed2fe07-17d8-46cf-aa94-7d898a047fdc 01:27:52.433 05:22:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:27:52.690 9a404f78-b291-4d5d-901d-b7431bfd4b77 01:27:52.690 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:27:52.947 /dev/nbd0 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 01:27:52.947 mke2fs 1.47.0 (5-Feb-2023) 01:27:52.947 Discarding device blocks: 0/4096 done 01:27:52.947 Creating filesystem with 4096 1k blocks and 1024 inodes 01:27:52.947 01:27:52.947 Allocating group tables: 0/1 done 01:27:52.947 Writing inode tables: 0/1 done 01:27:52.947 Creating journal (1024 blocks): done 01:27:52.947 Writing superblocks and filesystem accounting information: 0/1 done 01:27:52.947 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:27:52.947 05:22:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:52.947 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61275 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61275 ']' 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61275 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:53.204 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61275 01:27:53.462 killing process with pid 61275 01:27:53.462 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:53.462 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:53.462 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61275' 01:27:53.462 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61275 01:27:53.462 05:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61275 01:27:54.393 05:22:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:27:54.394 01:27:54.394 real 0m13.623s 01:27:54.394 user 0m19.379s 01:27:54.394 sys 0m4.380s 01:27:54.394 05:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:54.394 05:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:27:54.394 ************************************ 01:27:54.394 END TEST bdev_nbd 01:27:54.394 ************************************ 01:27:54.394 05:22:45 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:27:54.394 05:22:45 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 01:27:54.394 skipping fio tests on NVMe due to multi-ns failures. 01:27:54.394 05:22:45 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
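The nbd_with_lvol_verify step traced above exercises the whole logical-volume path over nbd in four RPCs plus one mkfs. Condensed into a standalone sketch (commands, names, and sizes are taken directly from the trace; error handling omitted):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore 'lvs' on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside 'lvs'
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # a successful mkfs proves real I/O works
    $rpc nbd_stop_disk /dev/nbd0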
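Two teardown checks recur in the trace: nbd_get_disks must come back as an empty JSON array once all devices are stopped, and the nbd target process is shut down via the killprocess helper (the kill -0 probe, the ps comm= lookup that prints reactor_0, and the final kill/wait are all visible above). A simplified sketch of both, assuming the rpc.py path and socket from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # No devices may remain: jq extracts the nbd_device fields, grep -c counts
    # them, and 'true' absorbs grep's exit status 1 on zero matches (the bare
    # 'true' in the trace plays the same role).
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || { echo "nbd devices still exported: $count" >&2; exit 1; }

    # Simplified killprocess: probe the pid, confirm it is an SPDK reactor
    # rather than an unrelated reuse of the pid, then terminate and reap it.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")      # trace shows: reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap; propagates exit status
    }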
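The START/END banners and the real/user/sys triplets that delimit bdev_nbd above, and every test that follows, come from the run_test wrapper in common/autotest_common.sh. A condensed sketch of its shape (the real helper also validates its argument count and toggles xtrace, which is what the @1105/@1111/@1130 lines in the trace are doing):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys block in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }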
01:27:54.394 05:22:45 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
01:27:54.394 05:22:45 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
01:27:54.394 05:22:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
01:27:54.394 05:22:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:27:54.394 05:22:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
01:27:54.663 ************************************
01:27:54.663 START TEST bdev_verify
01:27:54.663 ************************************
01:27:54.663 05:22:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
01:27:54.921 [2024-12-09 05:22:46.120900] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:27:54.921 [2024-12-09 05:22:46.121102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ]
01:27:54.921 [2024-12-09 05:22:46.306826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
01:27:54.921 [2024-12-09 05:22:46.432871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:27:54.921 [2024-12-09 05:22:46.432891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:27:55.855 Running I/O for 5 seconds...
01:27:57.728 19968.00 IOPS, 78.00 MiB/s [2024-12-09T05:22:50.302Z] 18624.00 IOPS, 72.75 MiB/s [2024-12-09T05:22:51.687Z] 18432.00 IOPS, 72.00 MiB/s [2024-12-09T05:22:52.619Z] 18880.00 IOPS, 73.75 MiB/s [2024-12-09T05:22:52.619Z] 18982.40 IOPS, 74.15 MiB/s
01:28:01.002 Latency(us)
01:28:01.002 [2024-12-09T05:22:52.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:28:01.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x0 length 0xbd0bd
01:28:01.003 Nvme0n1 : 5.06 1569.07 6.13 0.00 0.00 81327.88 17635.14 73876.95
01:28:01.003 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0xbd0bd length 0xbd0bd
01:28:01.003 Nvme0n1 : 5.06 1568.29 6.13 0.00 0.00 81451.88 12332.68 73876.95
01:28:01.003 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x0 length 0xa0000
01:28:01.003 Nvme1n1 : 5.06 1567.80 6.12 0.00 0.00 81221.73 20137.43 72447.07
01:28:01.003 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0xa0000 length 0xa0000
01:28:01.003 Nvme1n1 : 5.06 1567.39 6.12 0.00 0.00 81328.87 13583.83 71017.19
01:28:01.003 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x0 length 0x80000
01:28:01.003 Nvme2n1 : 5.07 1566.45 6.12 0.00 0.00 81120.06 21924.77 68634.07
01:28:01.003 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x80000 length 0x80000
01:28:01.003 Nvme2n1 : 5.06 1567.01 6.12 0.00 0.00 81139.91 13166.78 67680.81
01:28:01.003 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x0 length 0x80000
01:28:01.003 Nvme2n2 : 5.07 1565.44 6.12 0.00 0.00 81006.88 22163.08 67680.81
01:28:01.003 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x80000 length 0x80000
01:28:01.003 Nvme2n2 : 5.07 1566.51 6.12 0.00 0.00 80998.04 12988.04 65774.31
01:28:01.003 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x0 length 0x80000
01:28:01.003 Nvme2n3 : 5.08 1573.75 6.15 0.00 0.00 80521.15 4796.04 70540.57
01:28:01.003 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x80000 length 0x80000
01:28:01.003 Nvme2n3 : 5.07 1565.68 6.12 0.00 0.00 80863.61 14239.19 69110.69
01:28:01.003 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x0 length 0x20000
01:28:01.003 Nvme3n1 : 5.09 1573.18 6.15 0.00 0.00 80390.62 5064.15 74353.57
01:28:01.003 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
01:28:01.003 Verification LBA range: start 0x20000 length 0x20000
01:28:01.003 Nvme3n1 : 5.07 1565.25 6.11 0.00 0.00 80734.21 9651.67 73400.32
01:28:01.003 [2024-12-09T05:22:52.620Z] ===================================================================================================================
01:28:01.003 [2024-12-09T05:22:52.620Z] Total : 18815.82 73.50 0.00 0.00 81008.00 4796.04 74353.57
01:28:02.376
01:28:02.376 real 0m7.665s
01:28:02.376 user 0m14.024s
01:28:02.376 sys 0m0.357s
01:28:02.376 05:22:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
01:28:02.376 05:22:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
01:28:02.376 ************************************
01:28:02.376 END TEST bdev_verify
01:28:02.376 ************************************
01:28:02.376 05:22:53 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
01:28:02.376 05:22:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
01:28:02.376 05:22:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:28:02.376 05:22:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
01:28:02.376 ************************************
01:28:02.376 START TEST bdev_verify_big_io
01:28:02.376 ************************************
01:28:02.376 05:22:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
01:28:02.376 [2024-12-09 05:22:53.845248] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
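Both verify passes, the 4 KiB run above and the 64 KiB big-I/O run starting here, point bdevperf at the same bdev.json, which the harness generates from the attached controllers and removes again in cleanup. A minimal hand-written equivalent for a single PCIe controller would look like the following sketch (the traddr is a placeholder PCI address, not taken from this log):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    # then, as in the trace (-m 0x3 matching the two reactors shown above):
    # bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -m 0x3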
01:28:02.376 [2024-12-09 05:22:53.845470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61794 ]
01:28:02.634 [2024-12-09 05:22:54.042761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
01:28:02.634 [2024-12-09 05:22:54.210211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:28:02.634 [2024-12-09 05:22:54.210231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:28:03.575 Running I/O for 5 seconds...
01:28:08.769 1510.00 IOPS, 94.38 MiB/s [2024-12-09T05:23:00.951Z] 2751.50 IOPS, 171.97 MiB/s [2024-12-09T05:23:00.952Z] 3198.00 IOPS, 199.88 MiB/s
01:28:09.335 Latency(us)
01:28:09.335 [2024-12-09T05:23:00.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:28:09.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x0 length 0xbd0b
01:28:09.335 Nvme0n1 : 5.70 133.63 8.35 0.00 0.00 933144.91 20733.21 884616.84
01:28:09.335 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0xbd0b length 0xbd0b
01:28:09.335 Nvme0n1 : 5.58 137.59 8.60 0.00 0.00 899153.69 26333.56 876990.84
01:28:09.335 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x0 length 0xa000
01:28:09.335 Nvme1n1 : 5.71 134.54 8.41 0.00 0.00 900450.99 78166.57 854112.81
01:28:09.335 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0xa000 length 0xa000
01:28:09.335 Nvme1n1 : 5.58 137.51 8.59 0.00 0.00 874916.46 90082.21 831234.79
01:28:09.335 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x0 length 0x8000
01:28:09.335 Nvme2n1 : 5.74 134.02 8.38 0.00 0.00 878115.77 78643.20 896055.85
01:28:09.335 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x8000 length 0x8000
01:28:09.335 Nvme2n1 : 5.71 145.80 9.11 0.00 0.00 815107.72 30027.40 827421.79
01:28:09.335 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x0 length 0x8000
01:28:09.335 Nvme2n2 : 5.74 136.94 8.56 0.00 0.00 841591.69 26571.87 857925.82
01:28:09.335 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x8000 length 0x8000
01:28:09.335 Nvme2n2 : 5.71 145.72 9.11 0.00 0.00 793444.32 31457.28 819795.78
01:28:09.335 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x0 length 0x8000
01:28:09.335 Nvme2n3 : 5.77 141.31 8.83 0.00 0.00 797075.17 30980.65 1182031.13
01:28:09.335 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x8000 length 0x8000
01:28:09.335 Nvme2n3 : 5.78 151.73 9.48 0.00 0.00 742017.18 29074.15 827421.79
01:28:09.335 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x0 length 0x2000
01:28:09.335 Nvme3n1 : 5.79 144.26 9.02 0.00 0.00 760984.38 9770.82 1639591.56
01:28:09.335 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
01:28:09.335 Verification LBA range: start 0x2000 length 0x2000
01:28:09.335 Nvme3n1 : 5.79 165.78 10.36 0.00 0.00 665641.61 1429.88 869364.83
01:28:09.335 [2024-12-09T05:23:00.952Z] ===================================================================================================================
01:28:09.335 [2024-12-09T05:23:00.952Z] Total : 1708.84 106.80 0.00 0.00 820189.01 1429.88 1639591.56
01:28:11.233
01:28:11.233 real 0m8.805s
01:28:11.233 user 0m16.195s
01:28:11.233 sys 0m0.386s
01:28:11.233 05:23:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
01:28:11.233 ************************************
01:28:11.233 END TEST bdev_verify_big_io
01:28:11.233 05:23:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
01:28:11.233 ************************************
01:28:11.233 05:23:02 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:28:11.233 05:23:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
01:28:11.233 05:23:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:28:11.233 05:23:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
01:28:11.233 ************************************
01:28:11.233 START TEST bdev_write_zeroes
01:28:11.233 ************************************
01:28:11.233 05:23:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:28:11.491 [2024-12-09 05:23:02.703940] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:28:11.491 [2024-12-09 05:23:02.704142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61914 ]
01:28:11.491 [2024-12-09 05:23:02.899764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:28:11.491 [2024-12-09 05:23:03.058590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:28:12.427 Running I/O for 1 seconds...
01:28:13.359 57216.00 IOPS, 223.50 MiB/s
01:28:13.359 Latency(us)
01:28:13.359 [2024-12-09T05:23:04.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:28:13.359 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:28:13.359 Nvme0n1 : 1.03 9489.38 37.07 0.00 0.00 13456.00 11677.32 24188.74
01:28:13.359 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:28:13.359 Nvme1n1 : 1.03 9479.70 37.03 0.00 0.00 13449.70 12332.68 23831.27
01:28:13.359 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:28:13.359 Nvme2n1 : 1.03 9469.90 36.99 0.00 0.00 13400.47 9353.77 22997.18
01:28:13.359 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:28:13.360 Nvme2n2 : 1.03 9460.41 36.95 0.00 0.00 13393.89 9055.88 22282.24
01:28:13.360 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:28:13.360 Nvme2n3 : 1.03 9449.71 36.91 0.00 0.00 13382.59 8638.84 22639.71
01:28:13.360 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
01:28:13.360 Nvme3n1 : 1.03 9439.76 36.87 0.00 0.00 13371.11 8221.79 24427.05
01:28:13.360 [2024-12-09T05:23:04.977Z] ===================================================================================================================
01:28:13.360 [2024-12-09T05:23:04.977Z] Total : 56788.85 221.83 0.00 0.00 13408.96 8221.79 24427.05
01:28:14.292
01:28:14.292 real 0m3.304s
01:28:14.292 user 0m2.851s
01:28:14.292 sys 0m0.327s
01:28:14.292 05:23:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
01:28:14.292 05:23:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
01:28:14.292 ************************************
01:28:14.292 END TEST bdev_write_zeroes
01:28:14.292 ************************************
01:28:14.549 05:23:05 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:28:14.549 05:23:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
01:28:14.550 05:23:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:28:14.550 05:23:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
01:28:14.550 ************************************
01:28:14.550 START TEST bdev_json_nonenclosed
01:28:14.550 ************************************
01:28:14.550 05:23:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
01:28:14.550 [2024-12-09 05:23:06.040310] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
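The IOPS and MiB/s columns in the write_zeroes table above are two views of the same number: at a 4 KiB I/O size, MiB/s = IOPS x 4096 / 2^20. A quick cross-check of the first row:

    echo "scale=4; 9489.38 * 4096 / 1048576" | bc   # 37.0679, matching the 37.07 MiB/s shown for Nvme0n1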
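bdev_json_nonenclosed, starting here, and bdev_json_nonarray after it are negative tests: bdevperf is handed a deliberately malformed --json config and must refuse it with the json_config errors seen below rather than start I/O. A sketch of the shape of such a check (the file path and the expect-failure wrapper are illustrative, not the harness's exact code):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    echo '"subsystems": []' > /tmp/nonenclosed.json   # a JSON fragment, but not enclosed in {}
    if "$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "ERROR: bdevperf accepted an invalid config" >&2
        exit 1
    fi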
01:28:14.550 [2024-12-09 05:23:06.040463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61967 ] 01:28:14.807 [2024-12-09 05:23:06.213474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:14.807 [2024-12-09 05:23:06.332597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:14.807 [2024-12-09 05:23:06.332782] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:28:14.807 [2024-12-09 05:23:06.332816] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:28:14.807 [2024-12-09 05:23:06.332833] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:28:15.065 01:28:15.066 real 0m0.707s 01:28:15.066 user 0m0.461s 01:28:15.066 sys 0m0.141s 01:28:15.066 05:23:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:15.066 05:23:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:28:15.066 ************************************ 01:28:15.066 END TEST bdev_json_nonenclosed 01:28:15.066 ************************************ 01:28:15.334 05:23:06 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:15.334 05:23:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:28:15.334 05:23:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:15.334 05:23:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:28:15.334 ************************************ 01:28:15.334 START TEST bdev_json_nonarray 01:28:15.334 ************************************ 01:28:15.334 05:23:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:28:15.334 [2024-12-09 05:23:06.837700] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:15.334 [2024-12-09 05:23:06.837941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61993 ] 01:28:15.605 [2024-12-09 05:23:07.032952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:15.605 [2024-12-09 05:23:07.161940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:15.605 [2024-12-09 05:23:07.162092] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
01:28:15.605 [2024-12-09 05:23:07.162136] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:28:15.605 [2024-12-09 05:23:07.162153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:28:16.172 01:28:16.172 real 0m0.768s 01:28:16.172 user 0m0.498s 01:28:16.172 sys 0m0.163s 01:28:16.172 05:23:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:16.172 05:23:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:28:16.172 ************************************ 01:28:16.172 END TEST bdev_json_nonarray 01:28:16.172 ************************************ 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 01:28:16.172 05:23:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 01:28:16.172 01:28:16.172 real 0m44.534s 01:28:16.172 user 1m6.669s 01:28:16.172 sys 0m7.555s 01:28:16.172 05:23:07 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:16.172 05:23:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 01:28:16.172 ************************************ 01:28:16.172 END TEST blockdev_nvme 01:28:16.172 ************************************ 01:28:16.172 05:23:07 -- spdk/autotest.sh@209 -- # uname -s 01:28:16.172 05:23:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 01:28:16.172 05:23:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 01:28:16.172 05:23:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:16.172 05:23:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:16.172 05:23:07 -- common/autotest_common.sh@10 -- # set +x 01:28:16.172 ************************************ 01:28:16.172 START TEST blockdev_nvme_gpt 01:28:16.172 ************************************ 01:28:16.172 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 01:28:16.172 * Looking for test storage... 
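blockdev_nvme_gpt begins by probing the installed lcov version to decide which coverage flags to export; the xtrace below steps through the scripts/common.sh version comparison for `lt 1.15 2`. A condensed reconstruction of that walk (the helper name and the zero-defaulting are simplifications, not the script's exact code):

ver_lt() {                 # returns 0 (true) when $1 sorts strictly before $2
    local IFS=.-:          # split version components on '.', '-' and ':'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first difference decides
    done
    return 1               # equal versions are not strictly less
}
ver_lt 1.15 2 && echo "lcov is pre-2.x: keep the legacy --rc lcov_* option names"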
01:28:16.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:28:16.172 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:28:16.172 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 01:28:16.172 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:28:16.172 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:28:16.172 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:28:16.431 05:23:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:28:16.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:16.431 --rc genhtml_branch_coverage=1 01:28:16.431 --rc genhtml_function_coverage=1 01:28:16.431 --rc genhtml_legend=1 01:28:16.431 --rc geninfo_all_blocks=1 01:28:16.431 --rc geninfo_unexecuted_blocks=1 01:28:16.431 01:28:16.431 ' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:28:16.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:16.431 --rc 
genhtml_branch_coverage=1 01:28:16.431 --rc genhtml_function_coverage=1 01:28:16.431 --rc genhtml_legend=1 01:28:16.431 --rc geninfo_all_blocks=1 01:28:16.431 --rc geninfo_unexecuted_blocks=1 01:28:16.431 01:28:16.431 ' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:28:16.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:16.431 --rc genhtml_branch_coverage=1 01:28:16.431 --rc genhtml_function_coverage=1 01:28:16.431 --rc genhtml_legend=1 01:28:16.431 --rc geninfo_all_blocks=1 01:28:16.431 --rc geninfo_unexecuted_blocks=1 01:28:16.431 01:28:16.431 ' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:28:16.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:16.431 --rc genhtml_branch_coverage=1 01:28:16.431 --rc genhtml_function_coverage=1 01:28:16.431 --rc genhtml_legend=1 01:28:16.431 --rc geninfo_all_blocks=1 01:28:16.431 --rc geninfo_unexecuted_blocks=1 01:28:16.431 01:28:16.431 ' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62078 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62078 
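waitforlisten, invoked above, blocks until the freshly started spdk_tgt (pid 62078) answers RPCs on /var/tmp/spdk.sock; the real helper is the common/autotest_common.sh function whose locals (rpc_addr, max_retries) appear in the trace. A minimal stand-in with the same contract, for illustration only:

wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
        [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                     # timed out
}
wait_for_rpc_sock "$spdk_tgt_pid" && echo "spdk_tgt is up"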
01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62078 ']' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:16.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:16.431 05:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:16.431 [2024-12-09 05:23:07.957830] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:16.431 [2024-12-09 05:23:07.958016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62078 ] 01:28:16.689 [2024-12-09 05:23:08.145506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:16.689 [2024-12-09 05:23:08.275924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:17.626 05:23:09 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:17.626 05:23:09 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 01:28:17.626 05:23:09 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:28:17.626 05:23:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 01:28:17.626 05:23:09 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:28:17.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:28:18.142 Waiting for block devices as requested 01:28:18.143 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:28:18.401 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:28:18.401 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:28:18.401 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:28:23.718 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 01:28:23.718 05:23:15 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 01:28:23.718 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 01:28:23.718 BYT; 01:28:23.719 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 01:28:23.719 BYT; 01:28:23.719 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 01:28:23.719 05:23:15 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 01:28:23.719 05:23:15 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 01:28:24.650 The operation has completed successfully. 01:28:24.650 05:23:16 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 01:28:26.026 The operation has completed successfully. 01:28:26.026 05:23:17 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:28:26.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:28:26.850 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:28:26.850 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:28:26.850 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:28:26.850 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:28:27.109 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 01:28:27.109 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.109 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.109 [] 01:28:27.109 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.109 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 01:28:27.109 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 01:28:27.109 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 01:28:27.109 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:28:27.109 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 01:28:27.109 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.109 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:28:27.367 05:23:18 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 01:28:27.367 05:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:27.367 05:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:27.626 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:27.626 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:28:27.626 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 01:28:27.627 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e6fc81be-5060-4979-9163-6e9ff4eed269"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e6fc81be-5060-4979-9163-6e9ff4eed269",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d0b13335-0ea5-42bc-8641-4a806bf799ac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d0b13335-0ea5-42bc-8641-4a806bf799ac",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c53162e3-a8b5-4f2b-b5bd-9d3d4c6f23fc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c53162e3-a8b5-4f2b-b5bd-9d3d4c6f23fc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "390354ad-cfac-4bed-9b66-b2c0c4f12b92"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "390354ad-cfac-4bed-9b66-b2c0c4f12b92",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ed430025-c7c6-443d-a4e7-e860895b3fa5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ed430025-c7c6-443d-a4e7-e860895b3fa5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 01:28:27.627 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:28:27.627 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 01:28:27.627 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:28:27.627 05:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62078 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62078 ']' 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62078 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62078 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:27.627 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:27.627 killing process with pid 62078 01:28:27.628 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62078' 01:28:27.628 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62078 01:28:27.628 05:23:19 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62078 01:28:30.157 05:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:28:30.157 05:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:28:30.157 05:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:28:30.157 05:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:30.157 05:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:30.157 ************************************ 01:28:30.157 START TEST bdev_hello_world 01:28:30.157 ************************************ 01:28:30.157 05:23:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 01:28:30.157 
[2024-12-09 05:23:21.399461] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:30.157 [2024-12-09 05:23:21.399640] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62715 ] 01:28:30.157 [2024-12-09 05:23:21.586618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:30.157 [2024-12-09 05:23:21.728397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:31.092 [2024-12-09 05:23:22.427896] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:28:31.092 [2024-12-09 05:23:22.427965] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 01:28:31.093 [2024-12-09 05:23:22.428004] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:28:31.093 [2024-12-09 05:23:22.431352] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:28:31.093 [2024-12-09 05:23:22.431960] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:28:31.093 [2024-12-09 05:23:22.432000] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:28:31.093 [2024-12-09 05:23:22.432230] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 01:28:31.093 01:28:31.093 [2024-12-09 05:23:22.432267] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:28:32.494 01:28:32.494 real 0m2.383s 01:28:32.494 user 0m1.950s 01:28:32.494 sys 0m0.315s 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:32.494 ************************************ 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:28:32.494 END TEST bdev_hello_world 01:28:32.494 ************************************ 01:28:32.494 05:23:23 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:28:32.494 05:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:32.494 05:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:32.494 05:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:32.494 ************************************ 01:28:32.494 START TEST bdev_bounds 01:28:32.494 ************************************ 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62757 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:28:32.494 Process bdevio pid: 62757 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62757' 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62757 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62757 ']' 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:32.494 05:23:23 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:32.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:32.494 05:23:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:28:32.494 [2024-12-09 05:23:23.851477] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:32.495 [2024-12-09 05:23:23.851636] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62757 ] 01:28:32.495 [2024-12-09 05:23:24.042653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:28:32.753 [2024-12-09 05:23:24.197340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:32.753 [2024-12-09 05:23:24.197563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:32.753 [2024-12-09 05:23:24.197568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:28:33.688 05:23:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:33.688 05:23:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:28:33.688 05:23:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:28:33.688 I/O targets: 01:28:33.688 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 01:28:33.688 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 01:28:33.688 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 01:28:33.688 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 01:28:33.688 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 01:28:33.688 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 01:28:33.688 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 01:28:33.688 01:28:33.688 01:28:33.688 CUnit - A unit testing framework for C - Version 2.1-3 01:28:33.688 http://cunit.sourceforge.net/ 01:28:33.688 01:28:33.688 01:28:33.688 Suite: bdevio tests on: Nvme3n1 01:28:33.688 Test: blockdev write read block ...passed 01:28:33.688 Test: blockdev write zeroes read block ...passed 01:28:33.688 Test: blockdev write zeroes read no split ...passed 01:28:33.688 Test: blockdev write zeroes read split ...passed 01:28:33.688 Test: blockdev write zeroes read split partial ...passed 01:28:33.688 Test: blockdev reset ...[2024-12-09 05:23:25.156165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 01:28:33.688 [2024-12-09 05:23:25.160433] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
01:28:33.688 passed 01:28:33.688 Test: blockdev write read 8 blocks ...passed 01:28:33.688 Test: blockdev write read size > 128k ...passed 01:28:33.688 Test: blockdev write read invalid size ...passed 01:28:33.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:33.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:33.688 Test: blockdev write read max offset ...passed 01:28:33.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:33.688 Test: blockdev writev readv 8 blocks ...passed 01:28:33.688 Test: blockdev writev readv 30 x 1block ...passed 01:28:33.688 Test: blockdev writev readv block ...passed 01:28:33.688 Test: blockdev writev readv size > 128k ...passed 01:28:33.688 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:33.688 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.170123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbe04000 len:0x1000 01:28:33.688 [2024-12-09 05:23:25.170187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:28:33.688 passed 01:28:33.688 Test: blockdev nvme passthru rw ...passed 01:28:33.688 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:23:25.171146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:28:33.688 passed 01:28:33.688 Test: blockdev nvme admin passthru ...[2024-12-09 05:23:25.171192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:28:33.688 passed 01:28:33.688 Test: blockdev copy ...passed 01:28:33.688 Suite: bdevio tests on: Nvme2n3 01:28:33.688 Test: blockdev write read block ...passed 01:28:33.688 Test: blockdev write zeroes read block ...passed 01:28:33.688 Test: blockdev write zeroes read no split ...passed 01:28:33.688 Test: blockdev write zeroes read split ...passed 01:28:33.689 Test: blockdev write zeroes read split partial ...passed 01:28:33.689 Test: blockdev reset ...[2024-12-09 05:23:25.232507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:28:33.689 [2024-12-09 05:23:25.236844] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:28:33.689 passed 01:28:33.689 Test: blockdev write read 8 blocks ...passed 01:28:33.689 Test: blockdev write read size > 128k ...passed 01:28:33.689 Test: blockdev write read invalid size ...passed 01:28:33.689 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:33.689 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:33.689 Test: blockdev write read max offset ...passed 01:28:33.689 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:33.689 Test: blockdev writev readv 8 blocks ...passed 01:28:33.689 Test: blockdev writev readv 30 x 1block ...passed 01:28:33.689 Test: blockdev writev readv block ...passed 01:28:33.689 Test: blockdev writev readv size > 128k ...passed 01:28:33.689 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:33.689 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.245991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbe02000 len:0x1000 01:28:33.689 [2024-12-09 05:23:25.246104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:28:33.689 passed 01:28:33.689 Test: blockdev nvme passthru rw ...passed 01:28:33.689 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:23:25.247033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:28:33.689 passed 01:28:33.689 Test: blockdev nvme admin passthru ...[2024-12-09 05:23:25.247102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:28:33.689 passed 01:28:33.689 Test: blockdev copy ...passed 01:28:33.689 Suite: bdevio tests on: Nvme2n2 01:28:33.689 Test: blockdev write read block ...passed 01:28:33.689 Test: blockdev write zeroes read block ...passed 01:28:33.689 Test: blockdev write zeroes read no split ...passed 01:28:33.689 Test: blockdev write zeroes read split ...passed 01:28:33.946 Test: blockdev write zeroes read split partial ...passed 01:28:33.946 Test: blockdev reset ...[2024-12-09 05:23:25.319122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:28:33.946 [2024-12-09 05:23:25.323485] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:28:33.946 passed 01:28:33.946 Test: blockdev write read 8 blocks ...passed 01:28:33.946 Test: blockdev write read size > 128k ...passed 01:28:33.946 Test: blockdev write read invalid size ...passed 01:28:33.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:33.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:33.946 Test: blockdev write read max offset ...passed 01:28:33.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:33.946 Test: blockdev writev readv 8 blocks ...passed 01:28:33.946 Test: blockdev writev readv 30 x 1block ...passed 01:28:33.946 Test: blockdev writev readv block ...passed 01:28:33.946 Test: blockdev writev readv size > 128k ...passed 01:28:33.946 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:33.946 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.332957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc38000 len:0x1000 01:28:33.946 [2024-12-09 05:23:25.333013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:28:33.946 passed 01:28:33.946 Test: blockdev nvme passthru rw ...passed 01:28:33.946 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:23:25.333938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:28:33.946 [2024-12-09 05:23:25.333983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:28:33.946 passed 01:28:33.946 Test: blockdev nvme admin passthru ...passed 01:28:33.946 Test: blockdev copy ...passed 01:28:33.946 Suite: bdevio tests on: Nvme2n1 01:28:33.946 Test: blockdev write read block ...passed 01:28:33.946 Test: blockdev write zeroes read block ...passed 01:28:33.946 Test: blockdev write zeroes read no split ...passed 01:28:33.946 Test: blockdev write zeroes read split ...passed 01:28:33.946 Test: blockdev write zeroes read split partial ...passed 01:28:33.946 Test: blockdev reset ...[2024-12-09 05:23:25.400382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 01:28:33.946 [2024-12-09 05:23:25.404861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
01:28:33.946 passed 01:28:33.946 Test: blockdev write read 8 blocks ...passed 01:28:33.946 Test: blockdev write read size > 128k ...passed 01:28:33.946 Test: blockdev write read invalid size ...passed 01:28:33.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:33.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:33.946 Test: blockdev write read max offset ...passed 01:28:33.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:33.946 Test: blockdev writev readv 8 blocks ...passed 01:28:33.946 Test: blockdev writev readv 30 x 1block ...passed 01:28:33.946 Test: blockdev writev readv block ...passed 01:28:33.946 Test: blockdev writev readv size > 128k ...passed 01:28:33.946 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:33.946 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.413771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc34000 len:0x1000 01:28:33.946 [2024-12-09 05:23:25.413850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:28:33.946 passed 01:28:33.946 Test: blockdev nvme passthru rw ...passed 01:28:33.946 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:23:25.414649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 01:28:33.946 [2024-12-09 05:23:25.414733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 01:28:33.946 passed 01:28:33.946 Test: blockdev nvme admin passthru ...passed 01:28:33.946 Test: blockdev copy ...passed 01:28:33.946 Suite: bdevio tests on: Nvme1n1p2 01:28:33.946 Test: blockdev write read block ...passed 01:28:33.946 Test: blockdev write zeroes read block ...passed 01:28:33.946 Test: blockdev write zeroes read no split ...passed 01:28:33.946 Test: blockdev write zeroes read split ...passed 01:28:33.946 Test: blockdev write zeroes read split partial ...passed 01:28:33.946 Test: blockdev reset ...[2024-12-09 05:23:25.479630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 01:28:33.946 [2024-12-09 05:23:25.483721] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
01:28:33.946 passed 01:28:33.946 Test: blockdev write read 8 blocks ...passed 01:28:33.946 Test: blockdev write read size > 128k ...passed 01:28:33.946 Test: blockdev write read invalid size ...passed 01:28:33.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:33.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:33.946 Test: blockdev write read max offset ...passed 01:28:33.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:33.946 Test: blockdev writev readv 8 blocks ...passed 01:28:33.946 Test: blockdev writev readv 30 x 1block ...passed 01:28:33.946 Test: blockdev writev readv block ...passed 01:28:33.946 Test: blockdev writev readv size > 128k ...passed 01:28:33.946 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:33.946 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.493677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfc30000 len:0x1000 01:28:33.946 [2024-12-09 05:23:25.493736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:28:33.946 passed 01:28:33.946 Test: blockdev nvme passthru rw ...passed 01:28:33.946 Test: blockdev nvme passthru vendor specific ...passed 01:28:33.946 Test: blockdev nvme admin passthru ...passed 01:28:33.946 Test: blockdev copy ...passed 01:28:33.946 Suite: bdevio tests on: Nvme1n1p1 01:28:33.946 Test: blockdev write read block ...passed 01:28:33.946 Test: blockdev write zeroes read block ...passed 01:28:33.946 Test: blockdev write zeroes read no split ...passed 01:28:33.946 Test: blockdev write zeroes read split ...passed 01:28:33.946 Test: blockdev write zeroes read split partial ...passed 01:28:33.946 Test: blockdev reset ...[2024-12-09 05:23:25.552656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 01:28:33.946 [2024-12-09 05:23:25.556889] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
01:28:33.946 passed 01:28:33.946 Test: blockdev write read 8 blocks ...passed 01:28:33.946 Test: blockdev write read size > 128k ...passed 01:28:33.946 Test: blockdev write read invalid size ...passed 01:28:33.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:33.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:33.946 Test: blockdev write read max offset ...passed 01:28:34.203 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:34.203 Test: blockdev writev readv 8 blocks ...passed 01:28:34.203 Test: blockdev writev readv 30 x 1block ...passed 01:28:34.203 Test: blockdev writev readv block ...passed 01:28:34.203 Test: blockdev writev readv size > 128k ...passed 01:28:34.203 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:34.203 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.566635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bc00e000 len:0x1000 01:28:34.203 [2024-12-09 05:23:25.566702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 01:28:34.203 passed 01:28:34.203 Test: blockdev nvme passthru rw ...passed 01:28:34.204 Test: blockdev nvme passthru vendor specific ...passed 01:28:34.204 Test: blockdev nvme admin passthru ...passed 01:28:34.204 Test: blockdev copy ...passed 01:28:34.204 Suite: bdevio tests on: Nvme0n1 01:28:34.204 Test: blockdev write read block ...passed 01:28:34.204 Test: blockdev write zeroes read block ...passed 01:28:34.204 Test: blockdev write zeroes read no split ...passed 01:28:34.204 Test: blockdev write zeroes read split ...passed 01:28:34.204 Test: blockdev write zeroes read split partial ...passed 01:28:34.204 Test: blockdev reset ...[2024-12-09 05:23:25.625049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 01:28:34.204 [2024-12-09 05:23:25.629453] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 01:28:34.204 passed 01:28:34.204 Test: blockdev write read 8 blocks ...passed 01:28:34.204 Test: blockdev write read size > 128k ...passed 01:28:34.204 Test: blockdev write read invalid size ...passed 01:28:34.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:28:34.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:28:34.204 Test: blockdev write read max offset ...passed 01:28:34.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:28:34.204 Test: blockdev writev readv 8 blocks ...passed 01:28:34.204 Test: blockdev writev readv 30 x 1block ...passed 01:28:34.204 Test: blockdev writev readv block ...passed 01:28:34.204 Test: blockdev writev readv size > 128k ...passed 01:28:34.204 Test: blockdev writev readv size > 128k in two iovs ...passed 01:28:34.204 Test: blockdev comparev and writev ...[2024-12-09 05:23:25.636992] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 01:28:34.204 separate metadata which is not supported yet. 
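[editor's note] On Nvme0n1 the comparev_and_writev step is skipped because that namespace is formatted with separate metadata, which this bdevio test does not support yet. Whether a namespace carries per-block metadata is visible in its identify data; a minimal sketch, assuming nvme-cli and its human-readable id-ns output (field names and layout vary by nvme-cli version):

# Hedged sketch: check the in-use LBA format's metadata size with nvme-cli.
# A non-zero "ms:" on the "(in use)" LBA format means separate/extended metadata.
sudo nvme id-ns /dev/nvme0n1 | grep -E 'flbas|in use'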
01:28:34.204 passed 01:28:34.204 Test: blockdev nvme passthru rw ...passed 01:28:34.204 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:23:25.637654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 01:28:34.204 [2024-12-09 05:23:25.637723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 01:28:34.204 passed 01:28:34.204 Test: blockdev nvme admin passthru ...passed 01:28:34.204 Test: blockdev copy ...passed 01:28:34.204 01:28:34.204 Run Summary: Type Total Ran Passed Failed Inactive 01:28:34.204 suites 7 7 n/a 0 0 01:28:34.204 tests 161 161 161 0 0 01:28:34.204 asserts 1025 1025 1025 0 n/a 01:28:34.204 01:28:34.204 Elapsed time = 1.476 seconds 01:28:34.204 0 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62757 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62757 ']' 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62757 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62757 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:34.204 killing process with pid 62757 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62757' 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62757 01:28:34.204 05:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62757 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:28:35.579 01:28:35.579 real 0m3.129s 01:28:35.579 user 0m7.765s 01:28:35.579 sys 0m0.533s 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:28:35.579 ************************************ 01:28:35.579 END TEST bdev_bounds 01:28:35.579 ************************************ 01:28:35.579 05:23:26 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:28:35.579 05:23:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:28:35.579 05:23:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:35.579 05:23:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:35.579 ************************************ 01:28:35.579 START TEST bdev_nbd 01:28:35.579 ************************************ 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:28:35.579 05:23:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62822 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62822 /var/tmp/spdk-nbd.sock 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62822 ']' 01:28:35.579 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:28:35.580 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:28:35.580 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:28:35.580 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:35.580 05:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:28:35.580 [2024-12-09 05:23:27.048592] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
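[editor's note] nbd_function_test starts a bdev_svc app serving JSON-RPC on /var/tmp/spdk-nbd.sock, and waitforlisten blocks until that socket answers before any nbd_start_disk call is made. A standalone sketch of that wait, assuming SPDK's rpc.py from the checkout above and using rpc_get_methods as a harmless probe:

# Hedged sketch of a waitforlisten-style probe against the RPC socket.
sock=/var/tmp/spdk-nbd.sock
for i in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        echo "RPC server is up on $sock"
        break
    fi
    sleep 0.1
done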
01:28:35.580 [2024-12-09 05:23:27.048759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:28:35.838 [2024-12-09 05:23:27.231935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:35.838 [2024-12-09 05:23:27.379884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:36.769 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:37.026 1+0 records in 01:28:37.026 1+0 records out 01:28:37.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671546 s, 6.1 MB/s 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:37.026 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:37.027 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:37.027 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:37.027 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:37.284 1+0 records in 01:28:37.284 1+0 records out 01:28:37.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611447 s, 6.7 MB/s 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:37.284 05:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:37.542 1+0 records in 01:28:37.542 1+0 records out 01:28:37.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063843 s, 6.4 MB/s 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:37.542 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:38.107 1+0 records in 01:28:38.107 1+0 records out 01:28:38.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771851 s, 5.3 MB/s 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:38.107 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:38.365 1+0 records in 01:28:38.365 1+0 records out 01:28:38.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845156 s, 4.8 MB/s 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:38.365 05:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:38.622 1+0 records in 01:28:38.622 1+0 records out 01:28:38.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726471 s, 5.6 MB/s 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:38.622 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 01:28:39.188 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:39.189 1+0 records in 01:28:39.189 1+0 records out 01:28:39.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670696 s, 6.1 MB/s 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 01:28:39.189 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd0", 01:28:39.447 "bdev_name": "Nvme0n1" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd1", 01:28:39.447 "bdev_name": "Nvme1n1p1" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd2", 01:28:39.447 "bdev_name": "Nvme1n1p2" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd3", 01:28:39.447 "bdev_name": "Nvme2n1" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd4", 01:28:39.447 "bdev_name": "Nvme2n2" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd5", 01:28:39.447 "bdev_name": "Nvme2n3" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd6", 01:28:39.447 "bdev_name": "Nvme3n1" 01:28:39.447 } 01:28:39.447 ]' 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd0", 01:28:39.447 "bdev_name": "Nvme0n1" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd1", 01:28:39.447 "bdev_name": "Nvme1n1p1" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd2", 01:28:39.447 "bdev_name": "Nvme1n1p2" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd3", 01:28:39.447 "bdev_name": "Nvme2n1" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd4", 01:28:39.447 "bdev_name": "Nvme2n2" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd5", 01:28:39.447 "bdev_name": "Nvme2n3" 01:28:39.447 }, 01:28:39.447 { 01:28:39.447 "nbd_device": "/dev/nbd6", 01:28:39.447 "bdev_name": "Nvme3n1" 01:28:39.447 } 01:28:39.447 ]' 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:39.447 05:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:39.707 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:39.966 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:40.225 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:40.225 05:23:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 01:28:40.483 05:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:40.483 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:40.742 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:41.001 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:41.568 05:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:28:41.826 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:28:41.827 
05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:41.827 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 01:28:42.085 /dev/nbd0 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:42.085 1+0 records in 01:28:42.085 1+0 records out 01:28:42.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615552 s, 6.7 MB/s 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:42.085 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 01:28:42.343 /dev/nbd1 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:42.343 05:23:33 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:28:42.343 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:42.344 1+0 records in 01:28:42.344 1+0 records out 01:28:42.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583256 s, 7.0 MB/s 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:42.344 05:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 01:28:42.911 /dev/nbd10 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:42.911 1+0 records in 01:28:42.911 1+0 records out 01:28:42.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083702 s, 4.9 MB/s 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:42.911 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 01:28:43.171 /dev/nbd11 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:43.171 1+0 records in 01:28:43.171 1+0 records out 01:28:43.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818788 s, 5.0 MB/s 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:43.171 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 01:28:43.429 /dev/nbd12 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
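[editor's note] Every nbd_start_disk above is followed by waitfornbd, which first polls /proc/partitions for the new device and then proves one 4 KiB block is readable with a single direct-I/O dd, exactly as the trace shows. A condensed standalone version of that helper, assuming root privileges and using /tmp/nbdtest as a scratch path in place of the repo's nbdtest file:

# Hedged standalone version of the waitfornbd pattern from the trace (run as root).
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do            # wait for the kernel to publish the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do            # confirm one block is readable via direct I/O
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null && break
        sleep 0.1
    done
    if [[ -s /tmp/nbdtest ]]; then             # non-empty read-back means the device is live
        rm -f /tmp/nbdtest; return 0
    fi
    rm -f /tmp/nbdtest; return 1
}
waitfornbd nbd12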
01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:43.429 1+0 records in 01:28:43.429 1+0 records out 01:28:43.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007702 s, 5.3 MB/s 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:43.429 05:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 01:28:43.687 /dev/nbd13 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:43.687 1+0 records in 01:28:43.687 1+0 records out 01:28:43.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785121 s, 5.2 MB/s 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:43.687 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 01:28:43.945 /dev/nbd14 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:43.945 1+0 records in 01:28:43.945 1+0 records out 01:28:43.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836076 s, 4.9 MB/s 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:43.945 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd0", 01:28:44.511 "bdev_name": "Nvme0n1" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd1", 01:28:44.511 "bdev_name": "Nvme1n1p1" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd10", 01:28:44.511 "bdev_name": "Nvme1n1p2" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd11", 01:28:44.511 "bdev_name": "Nvme2n1" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd12", 01:28:44.511 "bdev_name": "Nvme2n2" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd13", 01:28:44.511 "bdev_name": "Nvme2n3" 
01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd14", 01:28:44.511 "bdev_name": "Nvme3n1" 01:28:44.511 } 01:28:44.511 ]' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd0", 01:28:44.511 "bdev_name": "Nvme0n1" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd1", 01:28:44.511 "bdev_name": "Nvme1n1p1" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd10", 01:28:44.511 "bdev_name": "Nvme1n1p2" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd11", 01:28:44.511 "bdev_name": "Nvme2n1" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd12", 01:28:44.511 "bdev_name": "Nvme2n2" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd13", 01:28:44.511 "bdev_name": "Nvme2n3" 01:28:44.511 }, 01:28:44.511 { 01:28:44.511 "nbd_device": "/dev/nbd14", 01:28:44.511 "bdev_name": "Nvme3n1" 01:28:44.511 } 01:28:44.511 ]' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:28:44.511 /dev/nbd1 01:28:44.511 /dev/nbd10 01:28:44.511 /dev/nbd11 01:28:44.511 /dev/nbd12 01:28:44.511 /dev/nbd13 01:28:44.511 /dev/nbd14' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:28:44.511 /dev/nbd1 01:28:44.511 /dev/nbd10 01:28:44.511 /dev/nbd11 01:28:44.511 /dev/nbd12 01:28:44.511 /dev/nbd13 01:28:44.511 /dev/nbd14' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:28:44.511 256+0 records in 01:28:44.511 256+0 records out 01:28:44.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726411 s, 144 MB/s 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:44.511 05:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:28:44.511 256+0 records in 01:28:44.511 256+0 records out 01:28:44.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.185578 s, 5.7 MB/s 01:28:44.511 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:44.511 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:28:44.770 256+0 records in 01:28:44.770 256+0 records out 01:28:44.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192255 s, 5.5 MB/s 01:28:44.770 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:44.770 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 01:28:45.029 256+0 records in 01:28:45.029 256+0 records out 01:28:45.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184094 s, 5.7 MB/s 01:28:45.029 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:45.029 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 01:28:45.288 256+0 records in 01:28:45.288 256+0 records out 01:28:45.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190394 s, 5.5 MB/s 01:28:45.288 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:45.288 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 01:28:45.288 256+0 records in 01:28:45.288 256+0 records out 01:28:45.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170803 s, 6.1 MB/s 01:28:45.288 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:45.288 05:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 01:28:45.545 256+0 records in 01:28:45.545 256+0 records out 01:28:45.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193148 s, 5.4 MB/s 01:28:45.545 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:28:45.545 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 01:28:45.803 256+0 records in 01:28:45.803 256+0 records out 01:28:45.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180815 s, 5.8 MB/s 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:45.803 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:46.062 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:46.334 05:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 01:28:46.907 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:46.908 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:47.165 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:47.423 05:23:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:47.681 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:47.939 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:28:48.197 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:28:48.454 malloc_lvol_verify 01:28:48.454 05:23:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:28:48.711 5b757460-c7e3-43c2-be56-cc5a0d46b8b3 01:28:48.712 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:28:48.969 df710893-035d-4b5b-9150-75c818f27db9 01:28:48.969 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:28:49.226 /dev/nbd0 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 01:28:49.483 mke2fs 1.47.0 (5-Feb-2023) 01:28:49.483 Discarding device blocks: 0/4096 done 01:28:49.483 Creating filesystem with 4096 1k blocks and 1024 inodes 01:28:49.483 01:28:49.483 Allocating group tables: 0/1 done 01:28:49.483 Writing inode tables: 0/1 done 01:28:49.483 Creating journal (1024 blocks): done 01:28:49.483 Writing superblocks and filesystem accounting information: 0/1 done 01:28:49.483 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 01:28:49.483 05:23:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62822 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62822 ']' 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62822 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62822 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:49.742 killing process with pid 62822 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62822' 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62822 01:28:49.742 05:23:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62822 01:28:51.118 05:23:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:28:51.118 01:28:51.118 real 0m15.555s 01:28:51.118 user 0m21.858s 01:28:51.118 sys 0m5.124s 01:28:51.118 05:23:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:51.118 05:23:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:28:51.118 ************************************ 01:28:51.118 END TEST bdev_nbd 01:28:51.118 ************************************ 01:28:51.118 05:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:28:51.118 05:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 01:28:51.118 05:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 01:28:51.118 skipping fio tests on NVMe due to multi-ns failures. 01:28:51.118 05:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
01:28:51.118 05:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 01:28:51.118 05:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:28:51.118 05:23:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:28:51.118 05:23:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:51.118 05:23:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:51.118 ************************************ 01:28:51.118 START TEST bdev_verify 01:28:51.118 ************************************ 01:28:51.118 05:23:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:28:51.118 [2024-12-09 05:23:42.661944] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:51.118 [2024-12-09 05:23:42.662186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63290 ] 01:28:51.376 [2024-12-09 05:23:42.855844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:28:51.635 [2024-12-09 05:23:43.007265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:51.635 [2024-12-09 05:23:43.007294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:52.201 Running I/O for 5 seconds... 
01:28:54.565 15360.00 IOPS, 60.00 MiB/s [2024-12-09T05:23:47.554Z] 16032.00 IOPS, 62.62 MiB/s [2024-12-09T05:23:48.118Z] 16512.00 IOPS, 64.50 MiB/s [2024-12-09T05:23:49.052Z] 16640.00 IOPS, 65.00 MiB/s [2024-12-09T05:23:49.052Z] 16358.40 IOPS, 63.90 MiB/s 01:28:57.435 Latency(us) 01:28:57.435 [2024-12-09T05:23:49.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:57.435 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0xbd0bd 01:28:57.435 Nvme0n1 : 5.05 1140.47 4.45 0.00 0.00 111659.16 27405.96 95325.09 01:28:57.435 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0xbd0bd length 0xbd0bd 01:28:57.435 Nvme0n1 : 5.13 1148.08 4.48 0.00 0.00 111149.10 26691.03 102951.10 01:28:57.435 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0x4ff80 01:28:57.435 Nvme1n1p1 : 5.10 1142.47 4.46 0.00 0.00 111189.67 16443.58 93418.59 01:28:57.435 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x4ff80 length 0x4ff80 01:28:57.435 Nvme1n1p1 : 5.13 1147.24 4.48 0.00 0.00 110940.66 29431.62 90558.84 01:28:57.435 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0x4ff7f 01:28:57.435 Nvme1n1p2 : 5.10 1141.99 4.46 0.00 0.00 110984.83 14954.12 92941.96 01:28:57.435 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x4ff7f length 0x4ff7f 01:28:57.435 Nvme1n1p2 : 5.14 1145.78 4.48 0.00 0.00 110802.00 33125.47 84362.71 01:28:57.435 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0x80000 01:28:57.435 Nvme2n1 : 5.13 1148.84 4.49 0.00 0.00 110522.23 19303.33 90558.84 01:28:57.435 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x80000 length 0x80000 01:28:57.435 Nvme2n1 : 5.14 1145.29 4.47 0.00 0.00 110617.13 35031.97 82932.83 01:28:57.435 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0x80000 01:28:57.435 Nvme2n2 : 5.13 1148.23 4.49 0.00 0.00 110351.36 18588.39 90082.21 01:28:57.435 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x80000 length 0x80000 01:28:57.435 Nvme2n2 : 5.14 1144.84 4.47 0.00 0.00 110418.93 31457.28 83409.45 01:28:57.435 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0x80000 01:28:57.435 Nvme2n3 : 5.13 1147.66 4.48 0.00 0.00 110186.31 18826.71 93418.59 01:28:57.435 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x80000 length 0x80000 01:28:57.435 Nvme2n3 : 5.15 1144.38 4.47 0.00 0.00 110230.46 25261.15 85792.58 01:28:57.435 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x0 length 0x20000 01:28:57.435 Nvme3n1 : 5.13 1147.14 4.48 0.00 0.00 110009.83 17754.30 94848.47 01:28:57.435 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:28:57.435 Verification LBA range: start 0x20000 length 0x20000 
01:28:57.435 Nvme3n1 : 5.15 1143.90 4.47 0.00 0.00 110066.97 20375.74 88652.33 01:28:57.435 [2024-12-09T05:23:49.052Z] =================================================================================================================== 01:28:57.435 [2024-12-09T05:23:49.052Z] Total : 16036.31 62.64 0.00 0.00 110649.80 14954.12 102951.10 01:28:58.810 01:28:58.810 real 0m7.843s 01:28:58.810 user 0m14.276s 01:28:58.810 sys 0m0.399s 01:28:58.810 05:23:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:58.810 05:23:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 01:28:58.810 ************************************ 01:28:58.810 END TEST bdev_verify 01:28:58.810 ************************************ 01:28:59.068 05:23:50 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:28:59.068 05:23:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:28:59.068 05:23:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:59.068 05:23:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:28:59.068 ************************************ 01:28:59.068 START TEST bdev_verify_big_io 01:28:59.068 ************************************ 01:28:59.068 05:23:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:28:59.068 [2024-12-09 05:23:50.552852] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:59.068 [2024-12-09 05:23:50.553069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63388 ] 01:28:59.325 [2024-12-09 05:23:50.733877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:28:59.325 [2024-12-09 05:23:50.856289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:59.325 [2024-12-09 05:23:50.856295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:00.260 Running I/O for 5 seconds... 
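The big-I/O pass now starting is the same bdevperf verify run with a single change: -o 65536 instead of 4096, i.e. 64 KiB I/Os. The trade shows up in the "Total" rows each run prints (the verify totals above, the big-I/O totals at the end of this run); multiplying IOPS by the I/O size recovers the reported bandwidth:

  # MiB/s = IOPS * io_size_bytes / 2^20, checked against the two Total rows:
  # bdev_verify        : 16036.31 * 4096  / 1048576 ≈  62.64 MiB/s
  # bdev_verify_big_io :  2194.74 * 65536 / 1048576 ≈ 137.17 MiB/s

Sixteen-times-larger I/Os cut IOPS by roughly 7x while more than doubling throughput, the expected shape once per-command overhead stops dominating.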
01:29:04.498 2086.00 IOPS, 130.38 MiB/s [2024-12-09T05:23:57.489Z] 3430.50 IOPS, 214.41 MiB/s [2024-12-09T05:23:57.748Z] 3060.00 IOPS, 191.25 MiB/s [2024-12-09T05:23:57.748Z] 3076.50 IOPS, 192.28 MiB/s 01:29:06.131 Latency(us) 01:29:06.131 [2024-12-09T05:23:57.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:06.131 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0xbd0b 01:29:06.131 Nvme0n1 : 5.70 134.88 8.43 0.00 0.00 911846.13 14715.81 903681.86 01:29:06.131 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0xbd0b length 0xbd0b 01:29:06.131 Nvme0n1 : 5.63 149.43 9.34 0.00 0.00 817027.80 23473.80 1311673.25 01:29:06.131 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0x4ff8 01:29:06.131 Nvme1n1p1 : 5.71 137.37 8.59 0.00 0.00 888498.09 57909.99 1334551.27 01:29:06.131 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x4ff8 length 0x4ff8 01:29:06.131 Nvme1n1p1 : 5.64 159.33 9.96 0.00 0.00 752459.70 76260.07 766413.73 01:29:06.131 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0x4ff7 01:29:06.131 Nvme1n1p2 : 5.75 137.29 8.58 0.00 0.00 863896.78 58863.24 1204909.15 01:29:06.131 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x4ff7 length 0x4ff7 01:29:06.131 Nvme1n1p2 : 5.70 160.97 10.06 0.00 0.00 736401.92 100091.35 937998.89 01:29:06.131 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0x8000 01:29:06.131 Nvme2n1 : 5.77 141.78 8.86 0.00 0.00 820840.37 37176.79 1380307.32 01:29:06.131 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x8000 length 0x8000 01:29:06.131 Nvme2n1 : 5.70 165.27 10.33 0.00 0.00 708195.35 57195.05 960876.92 01:29:06.131 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0x8000 01:29:06.131 Nvme2n2 : 5.75 147.44 9.21 0.00 0.00 770920.31 36938.47 1021884.97 01:29:06.131 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x8000 length 0x8000 01:29:06.131 Nvme2n2 : 5.74 174.88 10.93 0.00 0.00 660660.91 22043.93 808356.77 01:29:06.131 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0x8000 01:29:06.131 Nvme2n3 : 5.79 151.68 9.48 0.00 0.00 733102.13 15252.01 1448941.38 01:29:06.131 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x8000 length 0x8000 01:29:06.131 Nvme2n3 : 5.74 178.65 11.17 0.00 0.00 634224.96 9234.62 819795.78 01:29:06.131 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x0 length 0x2000 01:29:06.131 Nvme3n1 : 5.85 172.51 10.78 0.00 0.00 633051.27 1035.17 1471819.40 01:29:06.131 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:29:06.131 Verification LBA range: start 0x2000 length 0x2000 01:29:06.131 Nvme3n1 : 5.75 183.26 11.45 0.00 0.00 
605187.49 4438.57 823608.79 01:29:06.131 [2024-12-09T05:23:57.748Z] =================================================================================================================== 01:29:06.131 [2024-12-09T05:23:57.748Z] Total : 2194.74 137.17 0.00 0.00 742976.61 1035.17 1471819.40 01:29:08.034 01:29:08.034 real 0m9.176s 01:29:08.034 user 0m16.979s 01:29:08.034 sys 0m0.397s 01:29:08.034 05:23:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:08.034 ************************************ 01:29:08.034 END TEST bdev_verify_big_io 01:29:08.034 05:23:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 01:29:08.034 ************************************ 01:29:08.300 05:23:59 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:29:08.300 05:23:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:29:08.300 05:23:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:08.300 05:23:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:29:08.300 ************************************ 01:29:08.300 START TEST bdev_write_zeroes 01:29:08.300 ************************************ 01:29:08.300 05:23:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:29:08.300 [2024-12-09 05:23:59.773807] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:08.300 [2024-12-09 05:23:59.774062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63508 ] 01:29:08.572 [2024-12-09 05:23:59.954044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:08.572 [2024-12-09 05:24:00.091665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:09.506 Running I/O for 1 seconds... 
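The write_zeroes pass swaps the data-comparison workload for the zero-fill I/O type (-w write_zeroes) for one second on a single reactor. A bdev only takes this workload if it advertises it in its supported_io_types map, which is visible later in this same log where the GPT partition bdevs report "write_zeroes": true. A hypothetical spot-check, not part of this test:

  # Illustrative one-liner: list each bdev and whether it advertises write_zeroes.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq '.[] | {name: .name, write_zeroes: .supported_io_types.write_zeroes}'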
01:29:10.439 55488.00 IOPS, 216.75 MiB/s 01:29:10.439 Latency(us) 01:29:10.439 [2024-12-09T05:24:02.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:10.439 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme0n1 : 1.03 7856.53 30.69 0.00 0.00 16245.11 12809.31 35508.60 01:29:10.439 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme1n1p1 : 1.04 7843.53 30.64 0.00 0.00 16238.11 13047.62 34555.35 01:29:10.439 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme1n1p2 : 1.04 7830.44 30.59 0.00 0.00 16211.38 12749.73 33602.09 01:29:10.439 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme2n1 : 1.04 7818.66 30.54 0.00 0.00 16142.56 12571.00 32648.84 01:29:10.439 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme2n2 : 1.04 7806.50 30.49 0.00 0.00 16108.21 10485.76 31933.91 01:29:10.439 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme2n3 : 1.04 7794.73 30.45 0.00 0.00 16087.63 9413.35 33363.78 01:29:10.439 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:29:10.439 Nvme3n1 : 1.04 7721.82 30.16 0.00 0.00 16200.36 12928.47 35508.60 01:29:10.439 [2024-12-09T05:24:02.056Z] =================================================================================================================== 01:29:10.439 [2024-12-09T05:24:02.056Z] Total : 54672.20 213.56 0.00 0.00 16176.17 9413.35 35508.60 01:29:11.813 01:29:11.813 real 0m3.403s 01:29:11.813 user 0m3.000s 01:29:11.813 sys 0m0.282s 01:29:11.813 05:24:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:11.813 05:24:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 01:29:11.813 ************************************ 01:29:11.813 END TEST bdev_write_zeroes 01:29:11.813 ************************************ 01:29:11.813 05:24:03 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:29:11.813 05:24:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:29:11.813 05:24:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:11.813 05:24:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:29:11.813 ************************************ 01:29:11.813 START TEST bdev_json_nonenclosed 01:29:11.813 ************************************ 01:29:11.813 05:24:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:29:11.813 [2024-12-09 05:24:03.251557] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
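The bdev_json_nonenclosed run now starting (and the bdev_json_nonarray run after it) are negative tests: bdevperf is pointed at a deliberately malformed --json config and must refuse to start, so the *ERROR* lines and the "spdk_app_stop'd on non-zero" warning are the intended outcome and the TEST still ends as passed. This first one feeds a config whose top level is not enclosed in a JSON object. Illustrative shapes only, not the repo's actual nonenclosed.json:

  # rejected: valid JSON fragments, but the top level is not enclosed in {}
  "subsystems": []
  # accepted shape: a single top-level object
  { "subsystems": [] }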
01:29:11.813 [2024-12-09 05:24:03.251805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63567 ] 01:29:12.071 [2024-12-09 05:24:03.428955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:12.071 [2024-12-09 05:24:03.589634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:12.071 [2024-12-09 05:24:03.589783] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:29:12.071 [2024-12-09 05:24:03.589813] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:29:12.071 [2024-12-09 05:24:03.589838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:29:12.330 01:29:12.330 real 0m0.793s 01:29:12.330 user 0m0.536s 01:29:12.330 sys 0m0.151s 01:29:12.330 05:24:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:12.330 ************************************ 01:29:12.330 END TEST bdev_json_nonenclosed 01:29:12.330 ************************************ 01:29:12.330 05:24:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:29:12.589 05:24:03 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:29:12.589 05:24:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:29:12.589 05:24:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:12.589 05:24:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:29:12.589 ************************************ 01:29:12.589 START TEST bdev_json_nonarray 01:29:12.589 ************************************ 01:29:12.589 05:24:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:29:12.589 [2024-12-09 05:24:04.117538] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:12.589 [2024-12-09 05:24:04.117753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63598 ] 01:29:12.847 [2024-12-09 05:24:04.310140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:12.847 [2024-12-09 05:24:04.447886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:12.847 [2024-12-09 05:24:04.448049] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
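The second negative test is the same drill with a config that parses as an object but types "subsystems" wrongly; the error above demands an array. Illustrative shapes only, not the repo's actual nonarray.json:

  # rejected: "subsystems" is an object, not an array
  { "subsystems": { "subsystem": "bdev", "config": [] } }
  # accepted shape: an array of subsystem objects
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }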
01:29:12.847 [2024-12-09 05:24:04.448078] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:29:12.847 [2024-12-09 05:24:04.448091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:29:13.430 01:29:13.430 real 0m0.812s 01:29:13.430 user 0m0.547s 01:29:13.430 sys 0m0.159s 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:13.430 ************************************ 01:29:13.430 END TEST bdev_json_nonarray 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:29:13.430 ************************************ 01:29:13.430 05:24:04 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 01:29:13.430 05:24:04 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 01:29:13.430 05:24:04 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 01:29:13.430 05:24:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:13.430 05:24:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:13.430 05:24:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:29:13.430 ************************************ 01:29:13.430 START TEST bdev_gpt_uuid 01:29:13.430 ************************************ 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63618 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63618 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63618 ']' 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:13.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:13.430 05:24:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:29:13.430 [2024-12-09 05:24:04.984202] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
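bdev_gpt_uuid restarts a bare spdk_tgt, loads bdev.json, waits for examine so the GPT table on Nvme1n1 is parsed, then looks each partition bdev up by its unique partition GUID and cross-checks the RPC output with jq. The backslash-heavy [[ ... == \6\f\8... ]] comparisons further down are ordinary bash tests with every character escaped so the GUID is matched literally rather than as a glob. Condensed, with the rpc path and first-partition GUID from this log:

  # Condensed form of the GPT UUID check below (default /var/tmp/spdk.sock socket).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
  [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "6f89f330-603b-4116-ac73-2ca8eae53030" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "6f89f330-603b-4116-ac73-2ca8eae53030" ]]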
01:29:13.430 [2024-12-09 05:24:04.984370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63618 ] 01:29:13.687 [2024-12-09 05:24:05.187650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:13.945 [2024-12-09 05:24:05.368565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:14.879 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:14.879 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 01:29:14.879 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:29:14.879 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.879 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:29:15.137 Some configs were skipped because the RPC state that can call them passed over. 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.137 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 01:29:15.137 { 01:29:15.137 "name": "Nvme1n1p1", 01:29:15.137 "aliases": [ 01:29:15.137 "6f89f330-603b-4116-ac73-2ca8eae53030" 01:29:15.137 ], 01:29:15.137 "product_name": "GPT Disk", 01:29:15.137 "block_size": 4096, 01:29:15.137 "num_blocks": 655104, 01:29:15.137 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 01:29:15.137 "assigned_rate_limits": { 01:29:15.137 "rw_ios_per_sec": 0, 01:29:15.137 "rw_mbytes_per_sec": 0, 01:29:15.137 "r_mbytes_per_sec": 0, 01:29:15.137 "w_mbytes_per_sec": 0 01:29:15.137 }, 01:29:15.137 "claimed": false, 01:29:15.137 "zoned": false, 01:29:15.138 "supported_io_types": { 01:29:15.138 "read": true, 01:29:15.138 "write": true, 01:29:15.138 "unmap": true, 01:29:15.138 "flush": true, 01:29:15.138 "reset": true, 01:29:15.138 "nvme_admin": false, 01:29:15.138 "nvme_io": false, 01:29:15.138 "nvme_io_md": false, 01:29:15.138 "write_zeroes": true, 01:29:15.138 "zcopy": false, 01:29:15.138 "get_zone_info": false, 01:29:15.138 "zone_management": false, 01:29:15.138 "zone_append": false, 01:29:15.138 "compare": true, 01:29:15.138 "compare_and_write": false, 01:29:15.138 "abort": true, 01:29:15.138 "seek_hole": false, 01:29:15.138 "seek_data": false, 01:29:15.138 "copy": true, 01:29:15.138 "nvme_iov_md": false 01:29:15.138 }, 01:29:15.138 "driver_specific": { 
01:29:15.138 "gpt": { 01:29:15.138 "base_bdev": "Nvme1n1", 01:29:15.138 "offset_blocks": 256, 01:29:15.138 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 01:29:15.138 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 01:29:15.138 "partition_name": "SPDK_TEST_first" 01:29:15.138 } 01:29:15.138 } 01:29:15.138 } 01:29:15.138 ]' 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 01:29:15.138 { 01:29:15.138 "name": "Nvme1n1p2", 01:29:15.138 "aliases": [ 01:29:15.138 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 01:29:15.138 ], 01:29:15.138 "product_name": "GPT Disk", 01:29:15.138 "block_size": 4096, 01:29:15.138 "num_blocks": 655103, 01:29:15.138 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 01:29:15.138 "assigned_rate_limits": { 01:29:15.138 "rw_ios_per_sec": 0, 01:29:15.138 "rw_mbytes_per_sec": 0, 01:29:15.138 "r_mbytes_per_sec": 0, 01:29:15.138 "w_mbytes_per_sec": 0 01:29:15.138 }, 01:29:15.138 "claimed": false, 01:29:15.138 "zoned": false, 01:29:15.138 "supported_io_types": { 01:29:15.138 "read": true, 01:29:15.138 "write": true, 01:29:15.138 "unmap": true, 01:29:15.138 "flush": true, 01:29:15.138 "reset": true, 01:29:15.138 "nvme_admin": false, 01:29:15.138 "nvme_io": false, 01:29:15.138 "nvme_io_md": false, 01:29:15.138 "write_zeroes": true, 01:29:15.138 "zcopy": false, 01:29:15.138 "get_zone_info": false, 01:29:15.138 "zone_management": false, 01:29:15.138 "zone_append": false, 01:29:15.138 "compare": true, 01:29:15.138 "compare_and_write": false, 01:29:15.138 "abort": true, 01:29:15.138 "seek_hole": false, 01:29:15.138 "seek_data": false, 01:29:15.138 "copy": true, 01:29:15.138 "nvme_iov_md": false 01:29:15.138 }, 01:29:15.138 "driver_specific": { 01:29:15.138 "gpt": { 01:29:15.138 "base_bdev": "Nvme1n1", 01:29:15.138 "offset_blocks": 655360, 01:29:15.138 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 01:29:15.138 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 01:29:15.138 "partition_name": "SPDK_TEST_second" 01:29:15.138 } 01:29:15.138 } 01:29:15.138 } 01:29:15.138 ]' 01:29:15.138 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63618 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63618 ']' 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63618 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63618 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:15.462 killing process with pid 63618 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63618' 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63618 01:29:15.462 05:24:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63618 01:29:17.993 ************************************ 01:29:17.993 END TEST bdev_gpt_uuid 01:29:17.993 ************************************ 01:29:17.993 01:29:17.993 real 0m4.188s 01:29:17.993 user 0m4.423s 01:29:17.993 sys 0m0.613s 01:29:17.993 05:24:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:17.994 05:24:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 01:29:17.994 05:24:09 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:29:17.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:29:18.269 Waiting for block devices as requested 01:29:18.269 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:29:18.269 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 01:29:18.269 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:29:18.527 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:29:23.793 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:29:23.793 05:24:15 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 01:29:23.793 05:24:15 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 01:29:23.793 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 01:29:23.793 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 01:29:23.793 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 01:29:23.793 /dev/nvme0n1: calling ioctl to re-read partition table: Success 01:29:23.793 05:24:15 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 01:29:23.793 01:29:23.793 real 1m7.692s 01:29:23.793 user 1m26.231s 01:29:23.793 sys 0m11.349s 01:29:23.793 05:24:15 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:23.793 05:24:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 01:29:23.793 ************************************ 01:29:23.793 END TEST blockdev_nvme_gpt 01:29:23.793 ************************************ 01:29:23.793 05:24:15 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 01:29:23.793 05:24:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:23.793 05:24:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:23.793 05:24:15 -- common/autotest_common.sh@10 -- # set +x 01:29:23.793 ************************************ 01:29:23.793 START TEST nvme 01:29:23.793 ************************************ 01:29:23.793 05:24:15 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 01:29:24.052 * Looking for test storage... 01:29:24.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1693 -- # lcov --version 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:29:24.052 05:24:15 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:29:24.052 05:24:15 nvme -- scripts/common.sh@336 -- # IFS=.-: 01:29:24.052 05:24:15 nvme -- scripts/common.sh@336 -- # read -ra ver1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@337 -- # IFS=.-: 01:29:24.052 05:24:15 nvme -- scripts/common.sh@337 -- # read -ra ver2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@338 -- # local 'op=<' 01:29:24.052 05:24:15 nvme -- scripts/common.sh@340 -- # ver1_l=2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@341 -- # ver2_l=1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:29:24.052 05:24:15 nvme -- scripts/common.sh@344 -- # case "$op" in 01:29:24.052 05:24:15 nvme -- scripts/common.sh@345 -- # : 1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:29:24.052 05:24:15 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:29:24.052 05:24:15 nvme -- scripts/common.sh@365 -- # decimal 1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@353 -- # local d=1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:29:24.052 05:24:15 nvme -- scripts/common.sh@355 -- # echo 1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@365 -- # ver1[v]=1 01:29:24.052 05:24:15 nvme -- scripts/common.sh@366 -- # decimal 2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@353 -- # local d=2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:29:24.052 05:24:15 nvme -- scripts/common.sh@355 -- # echo 2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@366 -- # ver2[v]=2 01:29:24.052 05:24:15 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:29:24.052 05:24:15 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:29:24.052 05:24:15 nvme -- scripts/common.sh@368 -- # return 0 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:29:24.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:24.052 --rc genhtml_branch_coverage=1 01:29:24.052 --rc genhtml_function_coverage=1 01:29:24.052 --rc genhtml_legend=1 01:29:24.052 --rc geninfo_all_blocks=1 01:29:24.052 --rc geninfo_unexecuted_blocks=1 01:29:24.052 01:29:24.052 ' 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:29:24.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:24.052 --rc genhtml_branch_coverage=1 01:29:24.052 --rc genhtml_function_coverage=1 01:29:24.052 --rc genhtml_legend=1 01:29:24.052 --rc geninfo_all_blocks=1 01:29:24.052 --rc geninfo_unexecuted_blocks=1 01:29:24.052 01:29:24.052 ' 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:29:24.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:24.052 --rc genhtml_branch_coverage=1 01:29:24.052 --rc genhtml_function_coverage=1 01:29:24.052 --rc genhtml_legend=1 01:29:24.052 --rc geninfo_all_blocks=1 01:29:24.052 --rc geninfo_unexecuted_blocks=1 01:29:24.052 01:29:24.052 ' 01:29:24.052 05:24:15 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:29:24.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:24.052 --rc genhtml_branch_coverage=1 01:29:24.052 --rc genhtml_function_coverage=1 01:29:24.052 --rc genhtml_legend=1 01:29:24.052 --rc geninfo_all_blocks=1 01:29:24.052 --rc geninfo_unexecuted_blocks=1 01:29:24.052 01:29:24.052 ' 01:29:24.052 05:24:15 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:29:24.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:29:25.199 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:29:25.199 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:29:25.199 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:29:25.199 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:29:25.199 05:24:16 nvme -- nvme/nvme.sh@79 -- # uname 01:29:25.199 05:24:16 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 01:29:25.199 05:24:16 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 01:29:25.199 05:24:16 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 01:29:25.199 05:24:16 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 01:29:25.199 05:24:16 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 01:29:25.199 05:24:16 nvme -- common/autotest_common.sh@1073 -- # echo 0 01:29:25.199 05:24:16 nvme -- common/autotest_common.sh@1075 -- # stubpid=64273 01:29:25.199 05:24:16 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 01:29:25.199 Waiting for stub to ready for secondary processes... 01:29:25.199 05:24:16 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 01:29:25.199 05:24:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 01:29:25.200 05:24:16 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64273 ]] 01:29:25.200 05:24:16 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 01:29:25.200 [2024-12-09 05:24:16.797892] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:25.200 [2024-12-09 05:24:16.798089] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 01:29:26.135 05:24:17 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 01:29:26.394 05:24:17 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64273 ]] 01:29:26.394 05:24:17 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 01:29:26.653 [2024-12-09 05:24:18.180563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:29:26.910 [2024-12-09 05:24:18.309876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:29:26.910 [2024-12-09 05:24:18.310031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:26.910 [2024-12-09 05:24:18.310046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:29:26.910 [2024-12-09 05:24:18.332697] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 01:29:26.910 [2024-12-09 05:24:18.332748] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:29:26.910 [2024-12-09 05:24:18.345967] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 01:29:26.910 [2024-12-09 05:24:18.346107] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 01:29:26.910 [2024-12-09 05:24:18.348183] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:29:26.910 [2024-12-09 05:24:18.348397] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 01:29:26.910 [2024-12-09 05:24:18.348478] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 01:29:26.910 [2024-12-09 05:24:18.350687] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:29:26.910 [2024-12-09 05:24:18.350922] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 01:29:26.910 [2024-12-09 05:24:18.351024] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 01:29:26.910 [2024-12-09 05:24:18.353739] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 01:29:26.910 [2024-12-09 05:24:18.354114] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 01:29:26.910 [2024-12-09 05:24:18.354450] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 01:29:26.910 [2024-12-09 05:24:18.354569] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 01:29:26.910 [2024-12-09 05:24:18.354650] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 01:29:27.168 05:24:18 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 01:29:27.168 done. 01:29:27.168 05:24:18 nvme -- common/autotest_common.sh@1082 -- # echo done. 01:29:27.168 05:24:18 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 01:29:27.168 05:24:18 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 01:29:27.168 05:24:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:27.168 05:24:18 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:27.168 ************************************ 01:29:27.168 START TEST nvme_reset 01:29:27.168 ************************************ 01:29:27.168 05:24:18 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 01:29:27.733 Initializing NVMe Controllers 01:29:27.733 Skipping QEMU NVMe SSD at 0000:00:10.0 01:29:27.733 Skipping QEMU NVMe SSD at 0000:00:11.0 01:29:27.733 Skipping QEMU NVMe SSD at 0000:00:13.0 01:29:27.733 Skipping QEMU NVMe SSD at 0000:00:12.0 01:29:27.733 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 01:29:27.733 01:29:27.733 real 0m0.354s 01:29:27.733 user 0m0.134s 01:29:27.733 sys 0m0.162s 01:29:27.733 05:24:19 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:27.733 05:24:19 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 01:29:27.733 ************************************ 01:29:27.733 END TEST nvme_reset 01:29:27.733 ************************************ 01:29:27.733 05:24:19 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 01:29:27.733 05:24:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:27.733 05:24:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:27.733 05:24:19 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:27.733 ************************************ 01:29:27.733 START TEST nvme_identify 01:29:27.733 ************************************ 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 01:29:27.733 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 01:29:27.733 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 01:29:27.733 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 01:29:27.733 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:29:27.733 05:24:19 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:29:27.733 05:24:19 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:29:27.733 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 01:29:27.994 [2024-12-09 05:24:19.533767] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64306 terminated unexpected 01:29:27.994 ===================================================== 01:29:27.994 NVMe Controller at 0000:00:10.0 [1b36:0010] 01:29:27.994 ===================================================== 01:29:27.994 Controller Capabilities/Features 01:29:27.994 ================================ 01:29:27.994 Vendor ID: 1b36 01:29:27.994 Subsystem Vendor ID: 1af4 01:29:27.994 Serial Number: 12340 01:29:27.994 Model Number: QEMU NVMe Ctrl 01:29:27.994 Firmware Version: 8.0.0 01:29:27.994 Recommended Arb Burst: 6 01:29:27.994 IEEE OUI Identifier: 00 54 52 01:29:27.994 Multi-path I/O 01:29:27.994 May have multiple subsystem ports: No 01:29:27.994 May have multiple controllers: No 01:29:27.994 Associated with SR-IOV VF: No 01:29:27.995 Max Data Transfer Size: 524288 01:29:27.995 Max Number of Namespaces: 256 01:29:27.995 Max Number of I/O Queues: 64 01:29:27.995 NVMe Specification Version (VS): 1.4 01:29:27.995 NVMe Specification Version (Identify): 1.4 01:29:27.995 Maximum Queue Entries: 2048 01:29:27.995 Contiguous Queues Required: Yes 01:29:27.995 Arbitration Mechanisms Supported 01:29:27.995 Weighted Round Robin: Not Supported 01:29:27.995 Vendor Specific: Not Supported 01:29:27.995 Reset Timeout: 7500 ms 01:29:27.995 Doorbell Stride: 4 bytes 01:29:27.995 NVM Subsystem Reset: Not Supported 01:29:27.995 Command Sets Supported 01:29:27.995 NVM Command Set: Supported 01:29:27.995 Boot Partition: Not Supported 01:29:27.995 Memory Page Size Minimum: 4096 bytes 01:29:27.995 Memory Page Size Maximum: 65536 bytes 01:29:27.995 Persistent Memory Region: Not Supported 01:29:27.995 Optional Asynchronous Events Supported 01:29:27.995 Namespace Attribute Notices: Supported 01:29:27.995 Firmware Activation Notices: Not Supported 01:29:27.995 ANA Change Notices: Not Supported 01:29:27.995 PLE Aggregate Log Change Notices: Not Supported 01:29:27.995 LBA Status Info Alert Notices: Not Supported 01:29:27.995 EGE Aggregate Log Change Notices: Not Supported 01:29:27.995 Normal NVM Subsystem Shutdown event: Not Supported 01:29:27.995 Zone Descriptor Change Notices: Not Supported 01:29:27.995 Discovery Log Change Notices: Not Supported 01:29:27.995 Controller Attributes 01:29:27.995 128-bit Host Identifier: Not Supported 01:29:27.995 Non-Operational Permissive Mode: Not Supported 01:29:27.995 NVM Sets: Not Supported 01:29:27.995 Read Recovery Levels: Not Supported 01:29:27.995 Endurance Groups: Not Supported 01:29:27.995 Predictable Latency Mode: Not Supported 01:29:27.995 Traffic Based Keep ALive: Not Supported 01:29:27.995 Namespace Granularity: Not Supported 01:29:27.995 SQ Associations: Not Supported 01:29:27.995 UUID List: Not Supported 01:29:27.995 Multi-Domain Subsystem: Not Supported 01:29:27.995 Fixed Capacity Management: Not Supported 01:29:27.995 Variable Capacity Management: Not Supported 01:29:27.995 Delete Endurance Group: Not Supported 01:29:27.995 Delete NVM Set: Not Supported 01:29:27.995 Extended LBA Formats Supported: Supported 01:29:27.995 Flexible Data Placement Supported: Not Supported 01:29:27.995 01:29:27.995 Controller Memory Buffer Support 01:29:27.995 ================================ 01:29:27.995 Supported: No 
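
Note: the get_nvme_bdfs helper traced above boils down to a single pipeline. A minimal sketch in the same bash idiom (the gen_nvme.sh path and the JSON shape are exactly as shown in the trace; rootdir pointing at the SPDK checkout is an assumption of the sketch):

    # Emit the PCI address (traddr) of every NVMe controller that
    # gen_nvme.sh would turn into a bdev; fail if none were found.
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1
        printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
    }
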
01:29:27.995 01:29:27.995 Persistent Memory Region Support 01:29:27.995 ================================ 01:29:27.995 Supported: No 01:29:27.995 01:29:27.995 Admin Command Set Attributes 01:29:27.995 ============================ 01:29:27.995 Security Send/Receive: Not Supported 01:29:27.995 Format NVM: Supported 01:29:27.995 Firmware Activate/Download: Not Supported 01:29:27.995 Namespace Management: Supported 01:29:27.995 Device Self-Test: Not Supported 01:29:27.995 Directives: Supported 01:29:27.995 NVMe-MI: Not Supported 01:29:27.995 Virtualization Management: Not Supported 01:29:27.995 Doorbell Buffer Config: Supported 01:29:27.995 Get LBA Status Capability: Not Supported 01:29:27.995 Command & Feature Lockdown Capability: Not Supported 01:29:27.995 Abort Command Limit: 4 01:29:27.995 Async Event Request Limit: 4 01:29:27.995 Number of Firmware Slots: N/A 01:29:27.995 Firmware Slot 1 Read-Only: N/A 01:29:27.995 Firmware Activation Without Reset: N/A 01:29:27.995 Multiple Update Detection Support: N/A 01:29:27.995 Firmware Update Granularity: No Information Provided 01:29:27.995 Per-Namespace SMART Log: Yes 01:29:27.995 Asymmetric Namespace Access Log Page: Not Supported 01:29:27.995 Subsystem NQN: nqn.2019-08.org.qemu:12340 01:29:27.995 Command Effects Log Page: Supported 01:29:27.995 Get Log Page Extended Data: Supported 01:29:27.995 Telemetry Log Pages: Not Supported 01:29:27.995 Persistent Event Log Pages: Not Supported 01:29:27.995 Supported Log Pages Log Page: May Support 01:29:27.995 Commands Supported & Effects Log Page: Not Supported 01:29:27.995 Feature Identifiers & Effects Log Page:May Support 01:29:27.995 NVMe-MI Commands & Effects Log Page: May Support 01:29:27.995 Data Area 4 for Telemetry Log: Not Supported 01:29:27.995 Error Log Page Entries Supported: 1 01:29:27.995 Keep Alive: Not Supported 01:29:27.995 01:29:27.995 NVM Command Set Attributes 01:29:27.995 ========================== 01:29:27.995 Submission Queue Entry Size 01:29:27.995 Max: 64 01:29:27.995 Min: 64 01:29:27.995 Completion Queue Entry Size 01:29:27.995 Max: 16 01:29:27.995 Min: 16 01:29:27.995 Number of Namespaces: 256 01:29:27.995 Compare Command: Supported 01:29:27.995 Write Uncorrectable Command: Not Supported 01:29:27.995 Dataset Management Command: Supported 01:29:27.995 Write Zeroes Command: Supported 01:29:27.995 Set Features Save Field: Supported 01:29:27.995 Reservations: Not Supported 01:29:27.995 Timestamp: Supported 01:29:27.995 Copy: Supported 01:29:27.995 Volatile Write Cache: Present 01:29:27.995 Atomic Write Unit (Normal): 1 01:29:27.995 Atomic Write Unit (PFail): 1 01:29:27.995 Atomic Compare & Write Unit: 1 01:29:27.995 Fused Compare & Write: Not Supported 01:29:27.995 Scatter-Gather List 01:29:27.995 SGL Command Set: Supported 01:29:27.995 SGL Keyed: Not Supported 01:29:27.995 SGL Bit Bucket Descriptor: Not Supported 01:29:27.995 SGL Metadata Pointer: Not Supported 01:29:27.995 Oversized SGL: Not Supported 01:29:27.995 SGL Metadata Address: Not Supported 01:29:27.995 SGL Offset: Not Supported 01:29:27.995 Transport SGL Data Block: Not Supported 01:29:27.995 Replay Protected Memory Block: Not Supported 01:29:27.995 01:29:27.995 Firmware Slot Information 01:29:27.995 ========================= 01:29:27.995 Active slot: 1 01:29:27.995 Slot 1 Firmware Revision: 1.0 01:29:27.995 01:29:27.995 01:29:27.995 Commands Supported and Effects 01:29:27.995 ============================== 01:29:27.995 Admin Commands 01:29:27.995 -------------- 01:29:27.995 Delete I/O Submission Queue (00h): Supported 
01:29:27.995 Create I/O Submission Queue (01h): Supported 01:29:27.995 Get Log Page (02h): Supported 01:29:27.995 Delete I/O Completion Queue (04h): Supported 01:29:27.995 Create I/O Completion Queue (05h): Supported 01:29:27.995 Identify (06h): Supported 01:29:27.995 Abort (08h): Supported 01:29:27.995 Set Features (09h): Supported 01:29:27.995 Get Features (0Ah): Supported 01:29:27.995 Asynchronous Event Request (0Ch): Supported 01:29:27.995 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:27.995 Directive Send (19h): Supported 01:29:27.995 Directive Receive (1Ah): Supported 01:29:27.995 Virtualization Management (1Ch): Supported 01:29:27.995 Doorbell Buffer Config (7Ch): Supported 01:29:27.995 Format NVM (80h): Supported LBA-Change 01:29:27.995 I/O Commands 01:29:27.995 ------------ 01:29:27.995 Flush (00h): Supported LBA-Change 01:29:27.995 Write (01h): Supported LBA-Change 01:29:27.995 Read (02h): Supported 01:29:27.995 Compare (05h): Supported 01:29:27.995 Write Zeroes (08h): Supported LBA-Change 01:29:27.995 Dataset Management (09h): Supported LBA-Change 01:29:27.995 Unknown (0Ch): Supported 01:29:27.995 Unknown (12h): Supported 01:29:27.995 Copy (19h): Supported LBA-Change 01:29:27.995 Unknown (1Dh): Supported LBA-Change 01:29:27.995 01:29:27.995 Error Log 01:29:27.995 ========= 01:29:27.995 01:29:27.995 Arbitration 01:29:27.995 =========== 01:29:27.995 Arbitration Burst: no limit 01:29:27.995 01:29:27.995 Power Management 01:29:27.995 ================ 01:29:27.995 Number of Power States: 1 01:29:27.995 Current Power State: Power State #0 01:29:27.995 Power State #0: 01:29:27.995 Max Power: 25.00 W 01:29:27.995 Non-Operational State: Operational 01:29:27.995 Entry Latency: 16 microseconds 01:29:27.995 Exit Latency: 4 microseconds 01:29:27.995 Relative Read Throughput: 0 01:29:27.995 Relative Read Latency: 0 01:29:27.995 Relative Write Throughput: 0 01:29:27.995 Relative Write Latency: 0 01:29:27.995 [2024-12-09 05:24:19.536340] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64306 terminated unexpected 01:29:27.995 Idle Power: Not Reported 01:29:27.995 Active Power: Not Reported 01:29:27.995 Non-Operational Permissive Mode: Not Supported 01:29:27.995 01:29:27.995 Health Information 01:29:27.995 ================== 01:29:27.995 Critical Warnings: 01:29:27.995 Available Spare Space: OK 01:29:27.995 Temperature: OK 01:29:27.995 Device Reliability: OK 01:29:27.995 Read Only: No 01:29:27.995 Volatile Memory Backup: OK 01:29:27.995 Current Temperature: 323 Kelvin (50 Celsius) 01:29:27.995 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:27.995 Available Spare: 0% 01:29:27.995 Available Spare Threshold: 0% 01:29:27.995 Life Percentage Used: 0% 01:29:27.995 Data Units Read: 703 01:29:27.995 Data Units Written: 631 01:29:27.995 Host Read Commands: 31673 01:29:27.995 Host Write Commands: 31459 01:29:27.995 Controller Busy Time: 0 minutes 01:29:27.995 Power Cycles: 0 01:29:27.995 Power On Hours: 0 hours 01:29:27.995 Unsafe Shutdowns: 0 01:29:27.995 Unrecoverable Media Errors: 0 01:29:27.995 Lifetime Error Log Entries: 0 01:29:27.995 Warning Temperature Time: 0 minutes 01:29:27.995 Critical Temperature Time: 0 minutes 01:29:27.995 01:29:27.995 Number of Queues 01:29:27.995 ================ 01:29:27.995 Number of I/O Submission Queues: 64 01:29:27.995 Number of I/O Completion Queues: 64 01:29:27.995 01:29:27.995 ZNS Specific Controller Data 01:29:27.995 ============================ 01:29:27.995 Zone Append Size Limit: 0 01:29:27.995
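
Note: the identify dump above (and the three that follow for serials 12341, 12343 and 12342) comes from a single spdk_nvme_identify run that walks every attached controller. To reproduce it for just one controller, the example binary can be pointed at a single PCI address; a sketch, assuming the -r transport-ID option of the SPDK example apps (only the binary path and -i 0 appear in the trace itself):

    # Identify only the controller at 0000:00:10.0; -i 0 selects the same
    # shared-memory group id the traced run used.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:10.0' -i 0

The Celsius figures in the health sections are derived rather than reported: NVMe composite temperature is specified in Kelvin and the tool subtracts 273, so 323 Kelvin prints as 50 Celsius and the 343 Kelvin threshold as 70 Celsius.
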
01:29:27.995 01:29:27.995 Active Namespaces 01:29:27.995 ================= 01:29:27.995 Namespace ID:1 01:29:27.995 Error Recovery Timeout: Unlimited 01:29:27.995 Command Set Identifier: NVM (00h) 01:29:27.995 Deallocate: Supported 01:29:27.995 Deallocated/Unwritten Error: Supported 01:29:27.995 Deallocated Read Value: All 0x00 01:29:27.995 Deallocate in Write Zeroes: Not Supported 01:29:27.995 Deallocated Guard Field: 0xFFFF 01:29:27.995 Flush: Supported 01:29:27.995 Reservation: Not Supported 01:29:27.995 Metadata Transferred as: Separate Metadata Buffer 01:29:27.996 Namespace Sharing Capabilities: Private 01:29:27.996 Size (in LBAs): 1548666 (5GiB) 01:29:27.996 Capacity (in LBAs): 1548666 (5GiB) 01:29:27.996 Utilization (in LBAs): 1548666 (5GiB) 01:29:27.996 Thin Provisioning: Not Supported 01:29:27.996 Per-NS Atomic Units: No 01:29:27.996 Maximum Single Source Range Length: 128 01:29:27.996 Maximum Copy Length: 128 01:29:27.996 Maximum Source Range Count: 128 01:29:27.996 NGUID/EUI64 Never Reused: No 01:29:27.996 Namespace Write Protected: No 01:29:27.996 Number of LBA Formats: 8 01:29:27.996 Current LBA Format: LBA Format #07 01:29:27.996 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:27.996 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:27.996 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:27.996 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:27.996 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:27.996 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:27.996 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:27.996 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:27.996 01:29:27.996 NVM Specific Namespace Data 01:29:27.996 =========================== 01:29:27.996 Logical Block Storage Tag Mask: 0 01:29:27.996 Protection Information Capabilities: 01:29:27.996 16b Guard Protection Information Storage Tag Support: No 01:29:27.996 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:27.996 Storage Tag Check Read Support: No 01:29:27.996 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.996 ===================================================== 01:29:27.996 NVMe Controller at 0000:00:11.0 [1b36:0010] 01:29:27.996 ===================================================== 01:29:27.996 Controller Capabilities/Features 01:29:27.996 ================================ 01:29:27.996 Vendor ID: 1b36 01:29:27.996 Subsystem Vendor ID: 1af4 01:29:27.996 Serial Number: 12341 01:29:27.996 Model Number: QEMU NVMe Ctrl 01:29:27.996 Firmware Version: 8.0.0 01:29:27.996 Recommended Arb Burst: 6 01:29:27.996 IEEE OUI Identifier: 00 54 52 01:29:27.996 Multi-path I/O 01:29:27.996 May have multiple subsystem ports: No 01:29:27.996 May have multiple controllers: No 
01:29:27.996 Associated with SR-IOV VF: No 01:29:27.996 Max Data Transfer Size: 524288 01:29:27.996 Max Number of Namespaces: 256 01:29:27.996 Max Number of I/O Queues: 64 01:29:27.996 NVMe Specification Version (VS): 1.4 01:29:27.996 NVMe Specification Version (Identify): 1.4 01:29:27.996 Maximum Queue Entries: 2048 01:29:27.996 Contiguous Queues Required: Yes 01:29:27.996 Arbitration Mechanisms Supported 01:29:27.996 Weighted Round Robin: Not Supported 01:29:27.996 Vendor Specific: Not Supported 01:29:27.996 Reset Timeout: 7500 ms 01:29:27.996 Doorbell Stride: 4 bytes 01:29:27.996 NVM Subsystem Reset: Not Supported 01:29:27.996 Command Sets Supported 01:29:27.996 NVM Command Set: Supported 01:29:27.996 Boot Partition: Not Supported 01:29:27.996 Memory Page Size Minimum: 4096 bytes 01:29:27.996 Memory Page Size Maximum: 65536 bytes 01:29:27.996 Persistent Memory Region: Not Supported 01:29:27.996 Optional Asynchronous Events Supported 01:29:27.996 Namespace Attribute Notices: Supported 01:29:27.996 Firmware Activation Notices: Not Supported 01:29:27.996 ANA Change Notices: Not Supported 01:29:27.996 PLE Aggregate Log Change Notices: Not Supported 01:29:27.996 LBA Status Info Alert Notices: Not Supported 01:29:27.996 EGE Aggregate Log Change Notices: Not Supported 01:29:27.996 Normal NVM Subsystem Shutdown event: Not Supported 01:29:27.996 Zone Descriptor Change Notices: Not Supported 01:29:27.996 Discovery Log Change Notices: Not Supported 01:29:27.996 Controller Attributes 01:29:27.996 128-bit Host Identifier: Not Supported 01:29:27.996 Non-Operational Permissive Mode: Not Supported 01:29:27.996 NVM Sets: Not Supported 01:29:27.996 Read Recovery Levels: Not Supported 01:29:27.996 Endurance Groups: Not Supported 01:29:27.996 Predictable Latency Mode: Not Supported 01:29:27.996 Traffic Based Keep ALive: Not Supported 01:29:27.996 Namespace Granularity: Not Supported 01:29:27.996 SQ Associations: Not Supported 01:29:27.996 UUID List: Not Supported 01:29:27.996 Multi-Domain Subsystem: Not Supported 01:29:27.996 Fixed Capacity Management: Not Supported 01:29:27.996 Variable Capacity Management: Not Supported 01:29:27.996 Delete Endurance Group: Not Supported 01:29:27.996 Delete NVM Set: Not Supported 01:29:27.996 Extended LBA Formats Supported: Supported 01:29:27.996 Flexible Data Placement Supported: Not Supported 01:29:27.996 01:29:27.996 Controller Memory Buffer Support 01:29:27.996 ================================ 01:29:27.996 Supported: No 01:29:27.996 01:29:27.996 Persistent Memory Region Support 01:29:27.996 ================================ 01:29:27.996 Supported: No 01:29:27.996 01:29:27.996 Admin Command Set Attributes 01:29:27.996 ============================ 01:29:27.996 Security Send/Receive: Not Supported 01:29:27.996 Format NVM: Supported 01:29:27.996 Firmware Activate/Download: Not Supported 01:29:27.996 Namespace Management: Supported 01:29:27.996 Device Self-Test: Not Supported 01:29:27.996 Directives: Supported 01:29:27.996 NVMe-MI: Not Supported 01:29:27.996 Virtualization Management: Not Supported 01:29:27.996 Doorbell Buffer Config: Supported 01:29:27.996 Get LBA Status Capability: Not Supported 01:29:27.996 Command & Feature Lockdown Capability: Not Supported 01:29:27.996 Abort Command Limit: 4 01:29:27.996 Async Event Request Limit: 4 01:29:27.996 Number of Firmware Slots: N/A 01:29:27.996 Firmware Slot 1 Read-Only: N/A 01:29:27.996 Firmware Activation Without Reset: N/A 01:29:27.996 Multiple Update Detection Support: N/A 01:29:27.996 Firmware Update Granularity: No 
Information Provided 01:29:27.996 Per-Namespace SMART Log: Yes 01:29:27.996 Asymmetric Namespace Access Log Page: Not Supported 01:29:27.996 Subsystem NQN: nqn.2019-08.org.qemu:12341 01:29:27.996 Command Effects Log Page: Supported 01:29:27.996 Get Log Page Extended Data: Supported 01:29:27.996 Telemetry Log Pages: Not Supported 01:29:27.996 Persistent Event Log Pages: Not Supported 01:29:27.996 Supported Log Pages Log Page: May Support 01:29:27.996 Commands Supported & Effects Log Page: Not Supported 01:29:27.996 Feature Identifiers & Effects Log Page:May Support 01:29:27.996 NVMe-MI Commands & Effects Log Page: May Support 01:29:27.996 Data Area 4 for Telemetry Log: Not Supported 01:29:27.996 Error Log Page Entries Supported: 1 01:29:27.996 Keep Alive: Not Supported 01:29:27.996 01:29:27.996 NVM Command Set Attributes 01:29:27.996 ========================== 01:29:27.996 Submission Queue Entry Size 01:29:27.996 Max: 64 01:29:27.996 Min: 64 01:29:27.996 Completion Queue Entry Size 01:29:27.996 Max: 16 01:29:27.996 Min: 16 01:29:27.996 Number of Namespaces: 256 01:29:27.996 Compare Command: Supported 01:29:27.996 Write Uncorrectable Command: Not Supported 01:29:27.996 Dataset Management Command: Supported 01:29:27.996 Write Zeroes Command: Supported 01:29:27.996 Set Features Save Field: Supported 01:29:27.996 Reservations: Not Supported 01:29:27.996 Timestamp: Supported 01:29:27.996 Copy: Supported 01:29:27.996 Volatile Write Cache: Present 01:29:27.996 Atomic Write Unit (Normal): 1 01:29:27.996 Atomic Write Unit (PFail): 1 01:29:27.996 Atomic Compare & Write Unit: 1 01:29:27.996 Fused Compare & Write: Not Supported 01:29:27.996 Scatter-Gather List 01:29:27.996 SGL Command Set: Supported 01:29:27.996 SGL Keyed: Not Supported 01:29:27.996 SGL Bit Bucket Descriptor: Not Supported 01:29:27.996 SGL Metadata Pointer: Not Supported 01:29:27.996 Oversized SGL: Not Supported 01:29:27.997 SGL Metadata Address: Not Supported 01:29:27.997 SGL Offset: Not Supported 01:29:27.997 Transport SGL Data Block: Not Supported 01:29:27.997 Replay Protected Memory Block: Not Supported 01:29:27.997 01:29:27.997 Firmware Slot Information 01:29:27.997 ========================= 01:29:27.997 Active slot: 1 01:29:27.997 Slot 1 Firmware Revision: 1.0 01:29:27.997 01:29:27.997 01:29:27.997 Commands Supported and Effects 01:29:27.997 ============================== 01:29:27.997 Admin Commands 01:29:27.997 -------------- 01:29:27.997 Delete I/O Submission Queue (00h): Supported 01:29:27.997 Create I/O Submission Queue (01h): Supported 01:29:27.997 Get Log Page (02h): Supported 01:29:27.997 Delete I/O Completion Queue (04h): Supported 01:29:27.997 Create I/O Completion Queue (05h): Supported 01:29:27.997 Identify (06h): Supported 01:29:27.997 Abort (08h): Supported 01:29:27.997 Set Features (09h): Supported 01:29:27.997 Get Features (0Ah): Supported 01:29:27.997 Asynchronous Event Request (0Ch): Supported 01:29:27.997 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:27.997 Directive Send (19h): Supported 01:29:27.997 Directive Receive (1Ah): Supported 01:29:27.997 Virtualization Management (1Ch): Supported 01:29:27.997 Doorbell Buffer Config (7Ch): Supported 01:29:27.997 Format NVM (80h): Supported LBA-Change 01:29:27.997 I/O Commands 01:29:27.997 ------------ 01:29:27.997 Flush (00h): Supported LBA-Change 01:29:27.997 Write (01h): Supported LBA-Change 01:29:27.997 Read (02h): Supported 01:29:27.997 Compare (05h): Supported 01:29:27.997 Write Zeroes (08h): Supported LBA-Change 01:29:27.997 Dataset Management 
(09h): Supported LBA-Change 01:29:27.997 Unknown (0Ch): Supported 01:29:27.997 Unknown (12h): Supported 01:29:27.997 Copy (19h): Supported LBA-Change 01:29:27.997 Unknown (1Dh): Supported LBA-Change 01:29:27.997 01:29:27.997 Error Log 01:29:27.997 ========= 01:29:27.997 01:29:27.997 Arbitration 01:29:27.997 =========== 01:29:27.997 Arbitration Burst: no limit 01:29:27.997 01:29:27.997 Power Management 01:29:27.997 ================ 01:29:27.997 Number of Power States: 1 01:29:27.997 Current Power State: Power State #0 01:29:27.997 Power State #0: 01:29:27.997 Max Power: 25.00 W 01:29:27.997 Non-Operational State: Operational 01:29:27.997 Entry Latency: 16 microseconds 01:29:27.997 Exit Latency: 4 microseconds 01:29:27.997 Relative Read Throughput: 0 01:29:27.997 Relative Read Latency: 0 01:29:27.997 Relative Write Throughput: 0 01:29:27.997 Relative Write Latency: 0 01:29:27.997 Idle Power: Not Reported 01:29:27.997 Active Power: Not Reported 01:29:27.997 Non-Operational Permissive Mode: Not Supported 01:29:27.997 01:29:27.997 Health Information 01:29:27.997 ================== 01:29:27.997 Critical Warnings: 01:29:27.997 Available Spare Space: OK 01:29:27.997 [2024-12-09 05:24:19.537803] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64306 terminated unexpected 01:29:27.997 Temperature: OK 01:29:27.997 Device Reliability: OK 01:29:27.997 Read Only: No 01:29:27.997 Volatile Memory Backup: OK 01:29:27.997 Current Temperature: 323 Kelvin (50 Celsius) 01:29:27.997 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:27.997 Available Spare: 0% 01:29:27.997 Available Spare Threshold: 0% 01:29:27.997 Life Percentage Used: 0% 01:29:27.997 Data Units Read: 1098 01:29:27.997 Data Units Written: 958 01:29:27.997 Host Read Commands: 46356 01:29:27.997 Host Write Commands: 45052 01:29:27.997 Controller Busy Time: 0 minutes 01:29:27.997 Power Cycles: 0 01:29:27.997 Power On Hours: 0 hours 01:29:27.997 Unsafe Shutdowns: 0 01:29:27.997 Unrecoverable Media Errors: 0 01:29:27.997 Lifetime Error Log Entries: 0 01:29:27.997 Warning Temperature Time: 0 minutes 01:29:27.997 Critical Temperature Time: 0 minutes 01:29:27.997 01:29:27.997 Number of Queues 01:29:27.997 ================ 01:29:27.997 Number of I/O Submission Queues: 64 01:29:27.997 Number of I/O Completion Queues: 64 01:29:27.997 01:29:27.997 ZNS Specific Controller Data 01:29:27.997 ============================ 01:29:27.997 Zone Append Size Limit: 0 01:29:27.997 01:29:27.997 01:29:27.997 Active Namespaces 01:29:27.997 ================= 01:29:27.997 Namespace ID:1 01:29:27.997 Error Recovery Timeout: Unlimited 01:29:27.997 Command Set Identifier: NVM (00h) 01:29:27.997 Deallocate: Supported 01:29:27.997 Deallocated/Unwritten Error: Supported 01:29:27.997 Deallocated Read Value: All 0x00 01:29:27.997 Deallocate in Write Zeroes: Not Supported 01:29:27.997 Deallocated Guard Field: 0xFFFF 01:29:27.997 Flush: Supported 01:29:27.997 Reservation: Not Supported 01:29:27.997 Namespace Sharing Capabilities: Private 01:29:27.997 Size (in LBAs): 1310720 (5GiB) 01:29:27.997 Capacity (in LBAs): 1310720 (5GiB) 01:29:27.997 Utilization (in LBAs): 1310720 (5GiB) 01:29:27.997 Thin Provisioning: Not Supported 01:29:27.997 Per-NS Atomic Units: No 01:29:27.997 Maximum Single Source Range Length: 128 01:29:27.997 Maximum Copy Length: 128 01:29:27.997 Maximum Source Range Count: 128 01:29:27.997 NGUID/EUI64 Never Reused: No 01:29:27.997 Namespace Write Protected: No 01:29:27.997 Number of LBA Formats: 8 01:29:27.997 Current LBA Format:
LBA Format #04 01:29:27.997 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:27.997 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:27.997 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:27.997 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:27.997 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:27.997 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:27.997 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:27.997 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:27.997 01:29:27.997 NVM Specific Namespace Data 01:29:27.997 =========================== 01:29:27.997 Logical Block Storage Tag Mask: 0 01:29:27.997 Protection Information Capabilities: 01:29:27.997 16b Guard Protection Information Storage Tag Support: No 01:29:27.997 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:27.997 Storage Tag Check Read Support: No 01:29:27.997 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.997 ===================================================== 01:29:27.997 NVMe Controller at 0000:00:13.0 [1b36:0010] 01:29:27.997 ===================================================== 01:29:27.997 Controller Capabilities/Features 01:29:27.997 ================================ 01:29:27.997 Vendor ID: 1b36 01:29:27.997 Subsystem Vendor ID: 1af4 01:29:27.997 Serial Number: 12343 01:29:27.997 Model Number: QEMU NVMe Ctrl 01:29:27.997 Firmware Version: 8.0.0 01:29:27.997 Recommended Arb Burst: 6 01:29:27.997 IEEE OUI Identifier: 00 54 52 01:29:27.997 Multi-path I/O 01:29:27.997 May have multiple subsystem ports: No 01:29:27.997 May have multiple controllers: Yes 01:29:27.997 Associated with SR-IOV VF: No 01:29:27.997 Max Data Transfer Size: 524288 01:29:27.997 Max Number of Namespaces: 256 01:29:27.997 Max Number of I/O Queues: 64 01:29:27.997 NVMe Specification Version (VS): 1.4 01:29:27.997 NVMe Specification Version (Identify): 1.4 01:29:27.997 Maximum Queue Entries: 2048 01:29:27.997 Contiguous Queues Required: Yes 01:29:27.997 Arbitration Mechanisms Supported 01:29:27.997 Weighted Round Robin: Not Supported 01:29:27.997 Vendor Specific: Not Supported 01:29:27.997 Reset Timeout: 7500 ms 01:29:27.997 Doorbell Stride: 4 bytes 01:29:27.997 NVM Subsystem Reset: Not Supported 01:29:27.997 Command Sets Supported 01:29:27.997 NVM Command Set: Supported 01:29:27.997 Boot Partition: Not Supported 01:29:27.997 Memory Page Size Minimum: 4096 bytes 01:29:27.997 Memory Page Size Maximum: 65536 bytes 01:29:27.997 Persistent Memory Region: Not Supported 01:29:27.998 Optional Asynchronous Events Supported 01:29:27.998 Namespace Attribute Notices: Supported 01:29:27.998 Firmware Activation Notices: Not Supported 01:29:27.998 ANA Change Notices: Not Supported 01:29:27.998 PLE Aggregate Log 
Change Notices: Not Supported 01:29:27.998 LBA Status Info Alert Notices: Not Supported 01:29:27.998 EGE Aggregate Log Change Notices: Not Supported 01:29:27.998 Normal NVM Subsystem Shutdown event: Not Supported 01:29:27.998 Zone Descriptor Change Notices: Not Supported 01:29:27.998 Discovery Log Change Notices: Not Supported 01:29:27.998 Controller Attributes 01:29:27.998 128-bit Host Identifier: Not Supported 01:29:27.998 Non-Operational Permissive Mode: Not Supported 01:29:27.998 NVM Sets: Not Supported 01:29:27.998 Read Recovery Levels: Not Supported 01:29:27.998 Endurance Groups: Supported 01:29:27.998 Predictable Latency Mode: Not Supported 01:29:27.998 Traffic Based Keep ALive: Not Supported 01:29:27.998 Namespace Granularity: Not Supported 01:29:27.998 SQ Associations: Not Supported 01:29:27.998 UUID List: Not Supported 01:29:27.998 Multi-Domain Subsystem: Not Supported 01:29:27.998 Fixed Capacity Management: Not Supported 01:29:27.998 Variable Capacity Management: Not Supported 01:29:27.998 Delete Endurance Group: Not Supported 01:29:27.998 Delete NVM Set: Not Supported 01:29:27.998 Extended LBA Formats Supported: Supported 01:29:27.998 Flexible Data Placement Supported: Supported 01:29:27.998 01:29:27.998 Controller Memory Buffer Support 01:29:27.998 ================================ 01:29:27.998 Supported: No 01:29:27.998 01:29:27.998 Persistent Memory Region Support 01:29:27.998 ================================ 01:29:27.998 Supported: No 01:29:27.998 01:29:27.998 Admin Command Set Attributes 01:29:27.998 ============================ 01:29:27.998 Security Send/Receive: Not Supported 01:29:27.998 Format NVM: Supported 01:29:27.998 Firmware Activate/Download: Not Supported 01:29:27.998 Namespace Management: Supported 01:29:27.998 Device Self-Test: Not Supported 01:29:27.998 Directives: Supported 01:29:27.998 NVMe-MI: Not Supported 01:29:27.998 Virtualization Management: Not Supported 01:29:27.998 Doorbell Buffer Config: Supported 01:29:27.998 Get LBA Status Capability: Not Supported 01:29:27.998 Command & Feature Lockdown Capability: Not Supported 01:29:27.998 Abort Command Limit: 4 01:29:27.998 Async Event Request Limit: 4 01:29:27.998 Number of Firmware Slots: N/A 01:29:27.998 Firmware Slot 1 Read-Only: N/A 01:29:27.998 Firmware Activation Without Reset: N/A 01:29:27.998 Multiple Update Detection Support: N/A 01:29:27.998 Firmware Update Granularity: No Information Provided 01:29:27.998 Per-Namespace SMART Log: Yes 01:29:27.998 Asymmetric Namespace Access Log Page: Not Supported 01:29:27.998 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 01:29:27.998 Command Effects Log Page: Supported 01:29:27.998 Get Log Page Extended Data: Supported 01:29:27.998 Telemetry Log Pages: Not Supported 01:29:27.998 Persistent Event Log Pages: Not Supported 01:29:27.998 Supported Log Pages Log Page: May Support 01:29:27.998 Commands Supported & Effects Log Page: Not Supported 01:29:27.998 Feature Identifiers & Effects Log Page:May Support 01:29:27.998 NVMe-MI Commands & Effects Log Page: May Support 01:29:27.998 Data Area 4 for Telemetry Log: Not Supported 01:29:27.998 Error Log Page Entries Supported: 1 01:29:27.998 Keep Alive: Not Supported 01:29:27.998 01:29:27.998 NVM Command Set Attributes 01:29:27.998 ========================== 01:29:27.998 Submission Queue Entry Size 01:29:27.998 Max: 64 01:29:27.998 Min: 64 01:29:27.998 Completion Queue Entry Size 01:29:27.998 Max: 16 01:29:27.998 Min: 16 01:29:27.998 Number of Namespaces: 256 01:29:27.998 Compare Command: Supported 01:29:27.998 Write 
Uncorrectable Command: Not Supported 01:29:27.998 Dataset Management Command: Supported 01:29:27.998 Write Zeroes Command: Supported 01:29:27.998 Set Features Save Field: Supported 01:29:27.998 Reservations: Not Supported 01:29:27.998 Timestamp: Supported 01:29:27.998 Copy: Supported 01:29:27.998 Volatile Write Cache: Present 01:29:27.998 Atomic Write Unit (Normal): 1 01:29:27.998 Atomic Write Unit (PFail): 1 01:29:27.998 Atomic Compare & Write Unit: 1 01:29:27.998 Fused Compare & Write: Not Supported 01:29:27.998 Scatter-Gather List 01:29:27.998 SGL Command Set: Supported 01:29:27.998 SGL Keyed: Not Supported 01:29:27.998 SGL Bit Bucket Descriptor: Not Supported 01:29:27.998 SGL Metadata Pointer: Not Supported 01:29:27.998 Oversized SGL: Not Supported 01:29:27.998 SGL Metadata Address: Not Supported 01:29:27.998 SGL Offset: Not Supported 01:29:27.998 Transport SGL Data Block: Not Supported 01:29:27.998 Replay Protected Memory Block: Not Supported 01:29:27.998 01:29:27.998 Firmware Slot Information 01:29:27.998 ========================= 01:29:27.998 Active slot: 1 01:29:27.998 Slot 1 Firmware Revision: 1.0 01:29:27.998 01:29:27.998 01:29:27.998 Commands Supported and Effects 01:29:27.998 ============================== 01:29:27.998 Admin Commands 01:29:27.998 -------------- 01:29:27.998 Delete I/O Submission Queue (00h): Supported 01:29:27.998 Create I/O Submission Queue (01h): Supported 01:29:27.998 Get Log Page (02h): Supported 01:29:27.998 Delete I/O Completion Queue (04h): Supported 01:29:27.998 Create I/O Completion Queue (05h): Supported 01:29:27.998 Identify (06h): Supported 01:29:27.998 Abort (08h): Supported 01:29:27.998 Set Features (09h): Supported 01:29:27.998 Get Features (0Ah): Supported 01:29:27.998 Asynchronous Event Request (0Ch): Supported 01:29:27.998 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:27.998 Directive Send (19h): Supported 01:29:27.998 Directive Receive (1Ah): Supported 01:29:27.998 Virtualization Management (1Ch): Supported 01:29:27.998 Doorbell Buffer Config (7Ch): Supported 01:29:27.998 Format NVM (80h): Supported LBA-Change 01:29:27.998 I/O Commands 01:29:27.998 ------------ 01:29:27.998 Flush (00h): Supported LBA-Change 01:29:27.998 Write (01h): Supported LBA-Change 01:29:27.998 Read (02h): Supported 01:29:27.998 Compare (05h): Supported 01:29:27.998 Write Zeroes (08h): Supported LBA-Change 01:29:27.998 Dataset Management (09h): Supported LBA-Change 01:29:27.998 Unknown (0Ch): Supported 01:29:27.998 Unknown (12h): Supported 01:29:27.998 Copy (19h): Supported LBA-Change 01:29:27.998 Unknown (1Dh): Supported LBA-Change 01:29:27.998 01:29:27.998 Error Log 01:29:27.998 ========= 01:29:27.998 01:29:27.998 Arbitration 01:29:27.998 =========== 01:29:27.998 Arbitration Burst: no limit 01:29:27.998 01:29:27.998 Power Management 01:29:27.998 ================ 01:29:27.998 Number of Power States: 1 01:29:27.998 Current Power State: Power State #0 01:29:27.998 Power State #0: 01:29:27.998 Max Power: 25.00 W 01:29:27.998 Non-Operational State: Operational 01:29:27.998 Entry Latency: 16 microseconds 01:29:27.998 Exit Latency: 4 microseconds 01:29:27.998 Relative Read Throughput: 0 01:29:27.998 Relative Read Latency: 0 01:29:27.998 Relative Write Throughput: 0 01:29:27.998 Relative Write Latency: 0 01:29:27.998 Idle Power: Not Reported 01:29:27.998 Active Power: Not Reported 01:29:27.998 Non-Operational Permissive Mode: Not Supported 01:29:27.998 01:29:27.998 Health Information 01:29:27.998 ================== 01:29:27.998 Critical Warnings: 01:29:27.998 
Available Spare Space: OK 01:29:27.998 Temperature: OK 01:29:27.998 Device Reliability: OK 01:29:27.998 Read Only: No 01:29:27.998 Volatile Memory Backup: OK 01:29:27.998 Current Temperature: 323 Kelvin (50 Celsius) 01:29:27.998 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:27.998 Available Spare: 0% 01:29:27.998 Available Spare Threshold: 0% 01:29:27.998 Life Percentage Used: 0% 01:29:27.998 Data Units Read: 796 01:29:27.998 Data Units Written: 725 01:29:27.998 Host Read Commands: 32693 01:29:27.998 Host Write Commands: 32116 01:29:27.998 Controller Busy Time: 0 minutes 01:29:27.998 Power Cycles: 0 01:29:27.998 Power On Hours: 0 hours 01:29:27.998 Unsafe Shutdowns: 0 01:29:27.998 Unrecoverable Media Errors: 0 01:29:27.998 Lifetime Error Log Entries: 0 01:29:27.998 Warning Temperature Time: 0 minutes 01:29:27.998 Critical Temperature Time: 0 minutes 01:29:27.998 01:29:27.998 Number of Queues 01:29:27.998 ================ 01:29:27.999 Number of I/O Submission Queues: 64 01:29:27.999 Number of I/O Completion Queues: 64 01:29:27.999 01:29:27.999 ZNS Specific Controller Data 01:29:27.999 ============================ 01:29:27.999 Zone Append Size Limit: 0 01:29:27.999 01:29:27.999 01:29:27.999 Active Namespaces 01:29:27.999 ================= 01:29:27.999 Namespace ID:1 01:29:27.999 Error Recovery Timeout: Unlimited 01:29:27.999 Command Set Identifier: NVM (00h) 01:29:27.999 Deallocate: Supported 01:29:27.999 Deallocated/Unwritten Error: Supported 01:29:27.999 Deallocated Read Value: All 0x00 01:29:27.999 Deallocate in Write Zeroes: Not Supported 01:29:27.999 Deallocated Guard Field: 0xFFFF 01:29:27.999 Flush: Supported 01:29:27.999 Reservation: Not Supported 01:29:27.999 Namespace Sharing Capabilities: Multiple Controllers 01:29:27.999 Size (in LBAs): 262144 (1GiB) 01:29:27.999 Capacity (in LBAs): 262144 (1GiB) 01:29:27.999 Utilization (in LBAs): 262144 (1GiB) 01:29:27.999 Thin Provisioning: Not Supported 01:29:27.999 Per-NS Atomic Units: No 01:29:27.999 Maximum Single Source Range Length: 128 01:29:27.999 Maximum Copy Length: 128 01:29:27.999 Maximum Source Range Count: 128 01:29:27.999 NGUID/EUI64 Never Reused: No 01:29:27.999 Namespace Write Protected: No 01:29:27.999 Endurance group ID: 1 01:29:27.999 Number of LBA Formats: 8 01:29:27.999 Current LBA Format: LBA Format #04 01:29:27.999 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:27.999 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:27.999 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:27.999 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:27.999 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:27.999 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:27.999 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:27.999 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:27.999 01:29:27.999 Get Feature FDP: 01:29:27.999 ================ 01:29:27.999 Enabled: Yes 01:29:27.999 FDP configuration index: 0 01:29:27.999 01:29:27.999 FDP configurations log page 01:29:27.999 =========================== 01:29:27.999 Number of FDP configurations: 1 01:29:27.999 Version: 0 01:29:27.999 Size: 112 01:29:27.999 FDP Configuration Descriptor: 0 01:29:27.999 Descriptor Size: 96 01:29:27.999 Reclaim Group Identifier format: 2 01:29:27.999 FDP Volatile Write Cache: Not Present 01:29:27.999 FDP Configuration: Valid 01:29:27.999 Vendor Specific Size: 0 01:29:27.999 Number of Reclaim Groups: 2 01:29:27.999 Number of Recalim Unit Handles: 8 01:29:27.999 Max Placement Identifiers: 128 01:29:27.999 Number of 
Namespaces Suppprted: 256 01:29:27.999 Reclaim unit Nominal Size: 6000000 bytes 01:29:27.999 Estimated Reclaim Unit Time Limit: Not Reported 01:29:27.999 RUH Desc #000: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #001: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #002: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #003: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #004: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #005: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #006: RUH Type: Initially Isolated 01:29:27.999 RUH Desc #007: RUH Type: Initially Isolated 01:29:27.999 01:29:27.999 FDP reclaim unit handle usage log page 01:29:27.999 ====================================== 01:29:27.999 Number of Reclaim Unit Handles: 8 01:29:27.999 RUH Usage Desc #000: RUH Attributes: Controller Specified 01:29:27.999 RUH Usage Desc #001: RUH Attributes: Unused 01:29:27.999 RUH Usage Desc #002: RUH Attributes: Unused 01:29:27.999 RUH Usage Desc #003: RUH Attributes: Unused 01:29:27.999 RUH Usage Desc #004: RUH Attributes: Unused 01:29:27.999 RUH Usage Desc #005: RUH Attributes: Unused 01:29:27.999 RUH Usage Desc #006: RUH Attributes: Unused 01:29:27.999 RUH Usage Desc #007: RUH Attributes: Unused 01:29:27.999 01:29:27.999 FDP statistics log page 01:29:27.999 ======================= 01:29:27.999 Host bytes with metadata written: 459382784 01:29:27.999 [2024-12-09 05:24:19.540059] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64306 terminated unexpected 01:29:27.999 Media bytes with metadata written: 459448320 01:29:27.999 Media bytes erased: 0 01:29:27.999 01:29:27.999 FDP events log page 01:29:27.999 =================== 01:29:27.999 Number of FDP events: 0 01:29:27.999 01:29:27.999 NVM Specific Namespace Data 01:29:27.999 =========================== 01:29:27.999 Logical Block Storage Tag Mask: 0 01:29:27.999 Protection Information Capabilities: 01:29:27.999 16b Guard Protection Information Storage Tag Support: No 01:29:27.999 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:27.999 Storage Tag Check Read Support: No 01:29:27.999 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:27.999 ===================================================== 01:29:27.999 NVMe Controller at 0000:00:12.0 [1b36:0010] 01:29:27.999 ===================================================== 01:29:27.999 Controller Capabilities/Features 01:29:27.999 ================================ 01:29:27.999 Vendor ID: 1b36 01:29:27.999 Subsystem Vendor ID: 1af4 01:29:27.999 Serial Number: 12342 01:29:27.999 Model Number: QEMU NVMe Ctrl 01:29:27.999 Firmware Version: 8.0.0 01:29:27.999 Recommended Arb Burst: 6 01:29:27.999 IEEE OUI Identifier: 00 54 52 01:29:27.999 Multi-path I/O
01:29:27.999 May have multiple subsystem ports: No 01:29:27.999 May have multiple controllers: No 01:29:27.999 Associated with SR-IOV VF: No 01:29:27.999 Max Data Transfer Size: 524288 01:29:27.999 Max Number of Namespaces: 256 01:29:27.999 Max Number of I/O Queues: 64 01:29:27.999 NVMe Specification Version (VS): 1.4 01:29:27.999 NVMe Specification Version (Identify): 1.4 01:29:27.999 Maximum Queue Entries: 2048 01:29:27.999 Contiguous Queues Required: Yes 01:29:27.999 Arbitration Mechanisms Supported 01:29:27.999 Weighted Round Robin: Not Supported 01:29:27.999 Vendor Specific: Not Supported 01:29:27.999 Reset Timeout: 7500 ms 01:29:27.999 Doorbell Stride: 4 bytes 01:29:27.999 NVM Subsystem Reset: Not Supported 01:29:27.999 Command Sets Supported 01:29:27.999 NVM Command Set: Supported 01:29:27.999 Boot Partition: Not Supported 01:29:27.999 Memory Page Size Minimum: 4096 bytes 01:29:27.999 Memory Page Size Maximum: 65536 bytes 01:29:27.999 Persistent Memory Region: Not Supported 01:29:27.999 Optional Asynchronous Events Supported 01:29:27.999 Namespace Attribute Notices: Supported 01:29:27.999 Firmware Activation Notices: Not Supported 01:29:27.999 ANA Change Notices: Not Supported 01:29:27.999 PLE Aggregate Log Change Notices: Not Supported 01:29:27.999 LBA Status Info Alert Notices: Not Supported 01:29:27.999 EGE Aggregate Log Change Notices: Not Supported 01:29:28.000 Normal NVM Subsystem Shutdown event: Not Supported 01:29:28.000 Zone Descriptor Change Notices: Not Supported 01:29:28.000 Discovery Log Change Notices: Not Supported 01:29:28.000 Controller Attributes 01:29:28.000 128-bit Host Identifier: Not Supported 01:29:28.000 Non-Operational Permissive Mode: Not Supported 01:29:28.000 NVM Sets: Not Supported 01:29:28.000 Read Recovery Levels: Not Supported 01:29:28.000 Endurance Groups: Not Supported 01:29:28.000 Predictable Latency Mode: Not Supported 01:29:28.000 Traffic Based Keep ALive: Not Supported 01:29:28.000 Namespace Granularity: Not Supported 01:29:28.000 SQ Associations: Not Supported 01:29:28.000 UUID List: Not Supported 01:29:28.000 Multi-Domain Subsystem: Not Supported 01:29:28.000 Fixed Capacity Management: Not Supported 01:29:28.000 Variable Capacity Management: Not Supported 01:29:28.000 Delete Endurance Group: Not Supported 01:29:28.000 Delete NVM Set: Not Supported 01:29:28.000 Extended LBA Formats Supported: Supported 01:29:28.000 Flexible Data Placement Supported: Not Supported 01:29:28.000 01:29:28.000 Controller Memory Buffer Support 01:29:28.000 ================================ 01:29:28.000 Supported: No 01:29:28.000 01:29:28.000 Persistent Memory Region Support 01:29:28.000 ================================ 01:29:28.000 Supported: No 01:29:28.000 01:29:28.000 Admin Command Set Attributes 01:29:28.000 ============================ 01:29:28.000 Security Send/Receive: Not Supported 01:29:28.000 Format NVM: Supported 01:29:28.000 Firmware Activate/Download: Not Supported 01:29:28.000 Namespace Management: Supported 01:29:28.000 Device Self-Test: Not Supported 01:29:28.000 Directives: Supported 01:29:28.000 NVMe-MI: Not Supported 01:29:28.000 Virtualization Management: Not Supported 01:29:28.000 Doorbell Buffer Config: Supported 01:29:28.000 Get LBA Status Capability: Not Supported 01:29:28.000 Command & Feature Lockdown Capability: Not Supported 01:29:28.000 Abort Command Limit: 4 01:29:28.000 Async Event Request Limit: 4 01:29:28.000 Number of Firmware Slots: N/A 01:29:28.000 Firmware Slot 1 Read-Only: N/A 01:29:28.000 Firmware Activation Without Reset: N/A 
01:29:28.000 Multiple Update Detection Support: N/A 01:29:28.000 Firmware Update Granularity: No Information Provided 01:29:28.000 Per-Namespace SMART Log: Yes 01:29:28.000 Asymmetric Namespace Access Log Page: Not Supported 01:29:28.000 Subsystem NQN: nqn.2019-08.org.qemu:12342 01:29:28.000 Command Effects Log Page: Supported 01:29:28.000 Get Log Page Extended Data: Supported 01:29:28.000 Telemetry Log Pages: Not Supported 01:29:28.000 Persistent Event Log Pages: Not Supported 01:29:28.000 Supported Log Pages Log Page: May Support 01:29:28.000 Commands Supported & Effects Log Page: Not Supported 01:29:28.000 Feature Identifiers & Effects Log Page:May Support 01:29:28.000 NVMe-MI Commands & Effects Log Page: May Support 01:29:28.000 Data Area 4 for Telemetry Log: Not Supported 01:29:28.000 Error Log Page Entries Supported: 1 01:29:28.000 Keep Alive: Not Supported 01:29:28.000 01:29:28.000 NVM Command Set Attributes 01:29:28.000 ========================== 01:29:28.000 Submission Queue Entry Size 01:29:28.000 Max: 64 01:29:28.000 Min: 64 01:29:28.000 Completion Queue Entry Size 01:29:28.000 Max: 16 01:29:28.000 Min: 16 01:29:28.000 Number of Namespaces: 256 01:29:28.000 Compare Command: Supported 01:29:28.000 Write Uncorrectable Command: Not Supported 01:29:28.000 Dataset Management Command: Supported 01:29:28.000 Write Zeroes Command: Supported 01:29:28.000 Set Features Save Field: Supported 01:29:28.000 Reservations: Not Supported 01:29:28.000 Timestamp: Supported 01:29:28.000 Copy: Supported 01:29:28.000 Volatile Write Cache: Present 01:29:28.000 Atomic Write Unit (Normal): 1 01:29:28.000 Atomic Write Unit (PFail): 1 01:29:28.000 Atomic Compare & Write Unit: 1 01:29:28.000 Fused Compare & Write: Not Supported 01:29:28.000 Scatter-Gather List 01:29:28.000 SGL Command Set: Supported 01:29:28.000 SGL Keyed: Not Supported 01:29:28.000 SGL Bit Bucket Descriptor: Not Supported 01:29:28.000 SGL Metadata Pointer: Not Supported 01:29:28.000 Oversized SGL: Not Supported 01:29:28.000 SGL Metadata Address: Not Supported 01:29:28.000 SGL Offset: Not Supported 01:29:28.000 Transport SGL Data Block: Not Supported 01:29:28.000 Replay Protected Memory Block: Not Supported 01:29:28.000 01:29:28.000 Firmware Slot Information 01:29:28.000 ========================= 01:29:28.000 Active slot: 1 01:29:28.000 Slot 1 Firmware Revision: 1.0 01:29:28.000 01:29:28.000 01:29:28.000 Commands Supported and Effects 01:29:28.000 ============================== 01:29:28.000 Admin Commands 01:29:28.000 -------------- 01:29:28.000 Delete I/O Submission Queue (00h): Supported 01:29:28.000 Create I/O Submission Queue (01h): Supported 01:29:28.000 Get Log Page (02h): Supported 01:29:28.000 Delete I/O Completion Queue (04h): Supported 01:29:28.000 Create I/O Completion Queue (05h): Supported 01:29:28.000 Identify (06h): Supported 01:29:28.000 Abort (08h): Supported 01:29:28.000 Set Features (09h): Supported 01:29:28.000 Get Features (0Ah): Supported 01:29:28.000 Asynchronous Event Request (0Ch): Supported 01:29:28.000 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:28.000 Directive Send (19h): Supported 01:29:28.000 Directive Receive (1Ah): Supported 01:29:28.000 Virtualization Management (1Ch): Supported 01:29:28.000 Doorbell Buffer Config (7Ch): Supported 01:29:28.000 Format NVM (80h): Supported LBA-Change 01:29:28.000 I/O Commands 01:29:28.000 ------------ 01:29:28.000 Flush (00h): Supported LBA-Change 01:29:28.000 Write (01h): Supported LBA-Change 01:29:28.000 Read (02h): Supported 01:29:28.000 Compare (05h): 
Supported 01:29:28.000 Write Zeroes (08h): Supported LBA-Change 01:29:28.000 Dataset Management (09h): Supported LBA-Change 01:29:28.000 Unknown (0Ch): Supported 01:29:28.000 Unknown (12h): Supported 01:29:28.000 Copy (19h): Supported LBA-Change 01:29:28.000 Unknown (1Dh): Supported LBA-Change 01:29:28.000 01:29:28.000 Error Log 01:29:28.000 ========= 01:29:28.000 01:29:28.000 Arbitration 01:29:28.000 =========== 01:29:28.000 Arbitration Burst: no limit 01:29:28.000 01:29:28.000 Power Management 01:29:28.000 ================ 01:29:28.000 Number of Power States: 1 01:29:28.000 Current Power State: Power State #0 01:29:28.000 Power State #0: 01:29:28.000 Max Power: 25.00 W 01:29:28.000 Non-Operational State: Operational 01:29:28.000 Entry Latency: 16 microseconds 01:29:28.000 Exit Latency: 4 microseconds 01:29:28.000 Relative Read Throughput: 0 01:29:28.000 Relative Read Latency: 0 01:29:28.000 Relative Write Throughput: 0 01:29:28.000 Relative Write Latency: 0 01:29:28.000 Idle Power: Not Reported 01:29:28.000 Active Power: Not Reported 01:29:28.000 Non-Operational Permissive Mode: Not Supported 01:29:28.000 01:29:28.000 Health Information 01:29:28.000 ================== 01:29:28.000 Critical Warnings: 01:29:28.000 Available Spare Space: OK 01:29:28.000 Temperature: OK 01:29:28.000 Device Reliability: OK 01:29:28.000 Read Only: No 01:29:28.000 Volatile Memory Backup: OK 01:29:28.000 Current Temperature: 323 Kelvin (50 Celsius) 01:29:28.000 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:28.000 Available Spare: 0% 01:29:28.000 Available Spare Threshold: 0% 01:29:28.000 Life Percentage Used: 0% 01:29:28.000 Data Units Read: 2238 01:29:28.000 Data Units Written: 2025 01:29:28.000 Host Read Commands: 96804 01:29:28.000 Host Write Commands: 95073 01:29:28.000 Controller Busy Time: 0 minutes 01:29:28.000 Power Cycles: 0 01:29:28.000 Power On Hours: 0 hours 01:29:28.000 Unsafe Shutdowns: 0 01:29:28.000 Unrecoverable Media Errors: 0 01:29:28.000 Lifetime Error Log Entries: 0 01:29:28.000 Warning Temperature Time: 0 minutes 01:29:28.000 Critical Temperature Time: 0 minutes 01:29:28.000 01:29:28.000 Number of Queues 01:29:28.000 ================ 01:29:28.000 Number of I/O Submission Queues: 64 01:29:28.000 Number of I/O Completion Queues: 64 01:29:28.000 01:29:28.000 ZNS Specific Controller Data 01:29:28.001 ============================ 01:29:28.001 Zone Append Size Limit: 0 01:29:28.001 01:29:28.001 01:29:28.001 Active Namespaces 01:29:28.001 ================= 01:29:28.001 Namespace ID:1 01:29:28.001 Error Recovery Timeout: Unlimited 01:29:28.001 Command Set Identifier: NVM (00h) 01:29:28.001 Deallocate: Supported 01:29:28.001 Deallocated/Unwritten Error: Supported 01:29:28.001 Deallocated Read Value: All 0x00 01:29:28.001 Deallocate in Write Zeroes: Not Supported 01:29:28.001 Deallocated Guard Field: 0xFFFF 01:29:28.001 Flush: Supported 01:29:28.001 Reservation: Not Supported 01:29:28.001 Namespace Sharing Capabilities: Private 01:29:28.001 Size (in LBAs): 1048576 (4GiB) 01:29:28.001 Capacity (in LBAs): 1048576 (4GiB) 01:29:28.001 Utilization (in LBAs): 1048576 (4GiB) 01:29:28.001 Thin Provisioning: Not Supported 01:29:28.001 Per-NS Atomic Units: No 01:29:28.001 Maximum Single Source Range Length: 128 01:29:28.001 Maximum Copy Length: 128 01:29:28.001 Maximum Source Range Count: 128 01:29:28.001 NGUID/EUI64 Never Reused: No 01:29:28.001 Namespace Write Protected: No 01:29:28.001 Number of LBA Formats: 8 01:29:28.001 Current LBA Format: LBA Format #04 01:29:28.001 LBA Format #00: Data Size: 512 
Metadata Size: 0 01:29:28.001 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:28.001 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:28.001 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:28.001 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:28.001 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:28.001 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:28.001 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:28.001 01:29:28.001 NVM Specific Namespace Data 01:29:28.001 =========================== 01:29:28.001 Logical Block Storage Tag Mask: 0 01:29:28.001 Protection Information Capabilities: 01:29:28.001 16b Guard Protection Information Storage Tag Support: No 01:29:28.001 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:28.001 Storage Tag Check Read Support: No 01:29:28.001 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Namespace ID:2 01:29:28.001 Error Recovery Timeout: Unlimited 01:29:28.001 Command Set Identifier: NVM (00h) 01:29:28.001 Deallocate: Supported 01:29:28.001 Deallocated/Unwritten Error: Supported 01:29:28.001 Deallocated Read Value: All 0x00 01:29:28.001 Deallocate in Write Zeroes: Not Supported 01:29:28.001 Deallocated Guard Field: 0xFFFF 01:29:28.001 Flush: Supported 01:29:28.001 Reservation: Not Supported 01:29:28.001 Namespace Sharing Capabilities: Private 01:29:28.001 Size (in LBAs): 1048576 (4GiB) 01:29:28.001 Capacity (in LBAs): 1048576 (4GiB) 01:29:28.001 Utilization (in LBAs): 1048576 (4GiB) 01:29:28.001 Thin Provisioning: Not Supported 01:29:28.001 Per-NS Atomic Units: No 01:29:28.001 Maximum Single Source Range Length: 128 01:29:28.001 Maximum Copy Length: 128 01:29:28.001 Maximum Source Range Count: 128 01:29:28.001 NGUID/EUI64 Never Reused: No 01:29:28.001 Namespace Write Protected: No 01:29:28.001 Number of LBA Formats: 8 01:29:28.001 Current LBA Format: LBA Format #04 01:29:28.001 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:28.001 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:28.001 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:28.001 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:28.001 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:28.001 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:28.001 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:28.001 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:28.001 01:29:28.001 NVM Specific Namespace Data 01:29:28.001 =========================== 01:29:28.001 Logical Block Storage Tag Mask: 0 01:29:28.001 Protection Information Capabilities: 01:29:28.001 16b Guard Protection Information Storage Tag Support: No 01:29:28.001 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
01:29:28.001 Storage Tag Check Read Support: No 01:29:28.001 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.001 Namespace ID:3 01:29:28.001 Error Recovery Timeout: Unlimited 01:29:28.001 Command Set Identifier: NVM (00h) 01:29:28.001 Deallocate: Supported 01:29:28.001 Deallocated/Unwritten Error: Supported 01:29:28.001 Deallocated Read Value: All 0x00 01:29:28.001 Deallocate in Write Zeroes: Not Supported 01:29:28.001 Deallocated Guard Field: 0xFFFF 01:29:28.001 Flush: Supported 01:29:28.001 Reservation: Not Supported 01:29:28.001 Namespace Sharing Capabilities: Private 01:29:28.001 Size (in LBAs): 1048576 (4GiB) 01:29:28.260 Capacity (in LBAs): 1048576 (4GiB) 01:29:28.260 Utilization (in LBAs): 1048576 (4GiB) 01:29:28.260 Thin Provisioning: Not Supported 01:29:28.260 Per-NS Atomic Units: No 01:29:28.260 Maximum Single Source Range Length: 128 01:29:28.260 Maximum Copy Length: 128 01:29:28.260 Maximum Source Range Count: 128 01:29:28.260 NGUID/EUI64 Never Reused: No 01:29:28.260 Namespace Write Protected: No 01:29:28.260 Number of LBA Formats: 8 01:29:28.260 Current LBA Format: LBA Format #04 01:29:28.260 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:28.260 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:28.260 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:28.260 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:28.260 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:28.260 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:28.260 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:28.260 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:28.260 01:29:28.260 NVM Specific Namespace Data 01:29:28.260 =========================== 01:29:28.260 Logical Block Storage Tag Mask: 0 01:29:28.260 Protection Information Capabilities: 01:29:28.260 16b Guard Protection Information Storage Tag Support: No 01:29:28.260 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:28.260 Storage Tag Check Read Support: No 01:29:28.260 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.260 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:29:28.260 05:24:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:29:28.520 ===================================================== 01:29:28.520 NVMe Controller at 0000:00:10.0 [1b36:0010] 01:29:28.520 ===================================================== 01:29:28.520 Controller Capabilities/Features 01:29:28.520 ================================ 01:29:28.520 Vendor ID: 1b36 01:29:28.520 Subsystem Vendor ID: 1af4 01:29:28.520 Serial Number: 12340 01:29:28.520 Model Number: QEMU NVMe Ctrl 01:29:28.520 Firmware Version: 8.0.0 01:29:28.520 Recommended Arb Burst: 6 01:29:28.520 IEEE OUI Identifier: 00 54 52 01:29:28.520 Multi-path I/O 01:29:28.520 May have multiple subsystem ports: No 01:29:28.520 May have multiple controllers: No 01:29:28.520 Associated with SR-IOV VF: No 01:29:28.520 Max Data Transfer Size: 524288 01:29:28.520 Max Number of Namespaces: 256 01:29:28.520 Max Number of I/O Queues: 64 01:29:28.520 NVMe Specification Version (VS): 1.4 01:29:28.520 NVMe Specification Version (Identify): 1.4 01:29:28.520 Maximum Queue Entries: 2048 01:29:28.520 Contiguous Queues Required: Yes 01:29:28.520 Arbitration Mechanisms Supported 01:29:28.520 Weighted Round Robin: Not Supported 01:29:28.520 Vendor Specific: Not Supported 01:29:28.520 Reset Timeout: 7500 ms 01:29:28.520 Doorbell Stride: 4 bytes 01:29:28.520 NVM Subsystem Reset: Not Supported 01:29:28.520 Command Sets Supported 01:29:28.520 NVM Command Set: Supported 01:29:28.520 Boot Partition: Not Supported 01:29:28.520 Memory Page Size Minimum: 4096 bytes 01:29:28.520 Memory Page Size Maximum: 65536 bytes 01:29:28.520 Persistent Memory Region: Not Supported 01:29:28.520 Optional Asynchronous Events Supported 01:29:28.520 Namespace Attribute Notices: Supported 01:29:28.520 Firmware Activation Notices: Not Supported 01:29:28.520 ANA Change Notices: Not Supported 01:29:28.520 PLE Aggregate Log Change Notices: Not Supported 01:29:28.520 LBA Status Info Alert Notices: Not Supported 01:29:28.520 EGE Aggregate Log Change Notices: Not Supported 01:29:28.520 Normal NVM Subsystem Shutdown event: Not Supported 01:29:28.520 Zone Descriptor Change Notices: Not Supported 01:29:28.520 Discovery Log Change Notices: Not Supported 01:29:28.520 Controller Attributes 01:29:28.520 128-bit Host Identifier: Not Supported 01:29:28.520 Non-Operational Permissive Mode: Not Supported 01:29:28.520 NVM Sets: Not Supported 01:29:28.520 Read Recovery Levels: Not Supported 01:29:28.520 Endurance Groups: Not Supported 01:29:28.520 Predictable Latency Mode: Not Supported 01:29:28.520 Traffic Based Keep ALive: Not Supported 01:29:28.520 Namespace Granularity: Not Supported 01:29:28.520 SQ Associations: Not Supported 01:29:28.520 UUID List: Not Supported 01:29:28.520 Multi-Domain Subsystem: Not Supported 01:29:28.520 Fixed Capacity Management: Not Supported 01:29:28.520 Variable Capacity Management: Not Supported 01:29:28.520 Delete Endurance Group: Not Supported 01:29:28.520 Delete NVM Set: Not Supported 01:29:28.520 Extended LBA Formats Supported: Supported 01:29:28.520 Flexible Data Placement Supported: Not Supported 01:29:28.520 01:29:28.520 Controller Memory Buffer Support 01:29:28.520 ================================ 01:29:28.520 Supported: No 01:29:28.520 01:29:28.520 Persistent Memory Region Support 01:29:28.520 
================================ 01:29:28.520 Supported: No 01:29:28.520 01:29:28.520 Admin Command Set Attributes 01:29:28.520 ============================ 01:29:28.520 Security Send/Receive: Not Supported 01:29:28.520 Format NVM: Supported 01:29:28.520 Firmware Activate/Download: Not Supported 01:29:28.520 Namespace Management: Supported 01:29:28.520 Device Self-Test: Not Supported 01:29:28.520 Directives: Supported 01:29:28.520 NVMe-MI: Not Supported 01:29:28.520 Virtualization Management: Not Supported 01:29:28.520 Doorbell Buffer Config: Supported 01:29:28.520 Get LBA Status Capability: Not Supported 01:29:28.520 Command & Feature Lockdown Capability: Not Supported 01:29:28.520 Abort Command Limit: 4 01:29:28.520 Async Event Request Limit: 4 01:29:28.520 Number of Firmware Slots: N/A 01:29:28.520 Firmware Slot 1 Read-Only: N/A 01:29:28.520 Firmware Activation Without Reset: N/A 01:29:28.520 Multiple Update Detection Support: N/A 01:29:28.520 Firmware Update Granularity: No Information Provided 01:29:28.520 Per-Namespace SMART Log: Yes 01:29:28.520 Asymmetric Namespace Access Log Page: Not Supported 01:29:28.520 Subsystem NQN: nqn.2019-08.org.qemu:12340 01:29:28.520 Command Effects Log Page: Supported 01:29:28.520 Get Log Page Extended Data: Supported 01:29:28.520 Telemetry Log Pages: Not Supported 01:29:28.520 Persistent Event Log Pages: Not Supported 01:29:28.520 Supported Log Pages Log Page: May Support 01:29:28.520 Commands Supported & Effects Log Page: Not Supported 01:29:28.520 Feature Identifiers & Effects Log Page:May Support 01:29:28.520 NVMe-MI Commands & Effects Log Page: May Support 01:29:28.520 Data Area 4 for Telemetry Log: Not Supported 01:29:28.520 Error Log Page Entries Supported: 1 01:29:28.520 Keep Alive: Not Supported 01:29:28.520 01:29:28.520 NVM Command Set Attributes 01:29:28.520 ========================== 01:29:28.520 Submission Queue Entry Size 01:29:28.520 Max: 64 01:29:28.520 Min: 64 01:29:28.520 Completion Queue Entry Size 01:29:28.520 Max: 16 01:29:28.520 Min: 16 01:29:28.520 Number of Namespaces: 256 01:29:28.520 Compare Command: Supported 01:29:28.520 Write Uncorrectable Command: Not Supported 01:29:28.520 Dataset Management Command: Supported 01:29:28.520 Write Zeroes Command: Supported 01:29:28.520 Set Features Save Field: Supported 01:29:28.520 Reservations: Not Supported 01:29:28.520 Timestamp: Supported 01:29:28.520 Copy: Supported 01:29:28.520 Volatile Write Cache: Present 01:29:28.520 Atomic Write Unit (Normal): 1 01:29:28.520 Atomic Write Unit (PFail): 1 01:29:28.520 Atomic Compare & Write Unit: 1 01:29:28.520 Fused Compare & Write: Not Supported 01:29:28.520 Scatter-Gather List 01:29:28.520 SGL Command Set: Supported 01:29:28.520 SGL Keyed: Not Supported 01:29:28.520 SGL Bit Bucket Descriptor: Not Supported 01:29:28.520 SGL Metadata Pointer: Not Supported 01:29:28.520 Oversized SGL: Not Supported 01:29:28.520 SGL Metadata Address: Not Supported 01:29:28.520 SGL Offset: Not Supported 01:29:28.520 Transport SGL Data Block: Not Supported 01:29:28.520 Replay Protected Memory Block: Not Supported 01:29:28.520 01:29:28.520 Firmware Slot Information 01:29:28.520 ========================= 01:29:28.520 Active slot: 1 01:29:28.520 Slot 1 Firmware Revision: 1.0 01:29:28.520 01:29:28.520 01:29:28.520 Commands Supported and Effects 01:29:28.520 ============================== 01:29:28.520 Admin Commands 01:29:28.520 -------------- 01:29:28.520 Delete I/O Submission Queue (00h): Supported 01:29:28.520 Create I/O Submission Queue (01h): Supported 01:29:28.520 
Get Log Page (02h): Supported 01:29:28.520 Delete I/O Completion Queue (04h): Supported 01:29:28.520 Create I/O Completion Queue (05h): Supported 01:29:28.520 Identify (06h): Supported 01:29:28.520 Abort (08h): Supported 01:29:28.520 Set Features (09h): Supported 01:29:28.520 Get Features (0Ah): Supported 01:29:28.520 Asynchronous Event Request (0Ch): Supported 01:29:28.520 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:28.520 Directive Send (19h): Supported 01:29:28.520 Directive Receive (1Ah): Supported 01:29:28.520 Virtualization Management (1Ch): Supported 01:29:28.520 Doorbell Buffer Config (7Ch): Supported 01:29:28.520 Format NVM (80h): Supported LBA-Change 01:29:28.520 I/O Commands 01:29:28.520 ------------ 01:29:28.520 Flush (00h): Supported LBA-Change 01:29:28.520 Write (01h): Supported LBA-Change 01:29:28.520 Read (02h): Supported 01:29:28.520 Compare (05h): Supported 01:29:28.520 Write Zeroes (08h): Supported LBA-Change 01:29:28.521 Dataset Management (09h): Supported LBA-Change 01:29:28.521 Unknown (0Ch): Supported 01:29:28.521 Unknown (12h): Supported 01:29:28.521 Copy (19h): Supported LBA-Change 01:29:28.521 Unknown (1Dh): Supported LBA-Change 01:29:28.521 01:29:28.521 Error Log 01:29:28.521 ========= 01:29:28.521 01:29:28.521 Arbitration 01:29:28.521 =========== 01:29:28.521 Arbitration Burst: no limit 01:29:28.521 01:29:28.521 Power Management 01:29:28.521 ================ 01:29:28.521 Number of Power States: 1 01:29:28.521 Current Power State: Power State #0 01:29:28.521 Power State #0: 01:29:28.521 Max Power: 25.00 W 01:29:28.521 Non-Operational State: Operational 01:29:28.521 Entry Latency: 16 microseconds 01:29:28.521 Exit Latency: 4 microseconds 01:29:28.521 Relative Read Throughput: 0 01:29:28.521 Relative Read Latency: 0 01:29:28.521 Relative Write Throughput: 0 01:29:28.521 Relative Write Latency: 0 01:29:28.780 Idle Power: Not Reported 01:29:28.780 Active Power: Not Reported 01:29:28.780 Non-Operational Permissive Mode: Not Supported 01:29:28.780 01:29:28.780 Health Information 01:29:28.780 ================== 01:29:28.780 Critical Warnings: 01:29:28.780 Available Spare Space: OK 01:29:28.780 Temperature: OK 01:29:28.780 Device Reliability: OK 01:29:28.780 Read Only: No 01:29:28.780 Volatile Memory Backup: OK 01:29:28.780 Current Temperature: 323 Kelvin (50 Celsius) 01:29:28.780 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:28.780 Available Spare: 0% 01:29:28.780 Available Spare Threshold: 0% 01:29:28.780 Life Percentage Used: 0% 01:29:28.780 Data Units Read: 703 01:29:28.780 Data Units Written: 631 01:29:28.780 Host Read Commands: 31673 01:29:28.780 Host Write Commands: 31459 01:29:28.780 Controller Busy Time: 0 minutes 01:29:28.780 Power Cycles: 0 01:29:28.780 Power On Hours: 0 hours 01:29:28.780 Unsafe Shutdowns: 0 01:29:28.780 Unrecoverable Media Errors: 0 01:29:28.780 Lifetime Error Log Entries: 0 01:29:28.780 Warning Temperature Time: 0 minutes 01:29:28.780 Critical Temperature Time: 0 minutes 01:29:28.780 01:29:28.780 Number of Queues 01:29:28.780 ================ 01:29:28.780 Number of I/O Submission Queues: 64 01:29:28.780 Number of I/O Completion Queues: 64 01:29:28.780 01:29:28.780 ZNS Specific Controller Data 01:29:28.780 ============================ 01:29:28.780 Zone Append Size Limit: 0 01:29:28.780 01:29:28.780 01:29:28.780 Active Namespaces 01:29:28.780 ================= 01:29:28.780 Namespace ID:1 01:29:28.780 Error Recovery Timeout: Unlimited 01:29:28.780 Command Set Identifier: NVM (00h) 01:29:28.780 Deallocate: Supported 
01:29:28.780 Deallocated/Unwritten Error: Supported 01:29:28.780 Deallocated Read Value: All 0x00 01:29:28.780 Deallocate in Write Zeroes: Not Supported 01:29:28.780 Deallocated Guard Field: 0xFFFF 01:29:28.780 Flush: Supported 01:29:28.780 Reservation: Not Supported 01:29:28.780 Metadata Transferred as: Separate Metadata Buffer 01:29:28.780 Namespace Sharing Capabilities: Private 01:29:28.780 Size (in LBAs): 1548666 (5GiB) 01:29:28.780 Capacity (in LBAs): 1548666 (5GiB) 01:29:28.780 Utilization (in LBAs): 1548666 (5GiB) 01:29:28.780 Thin Provisioning: Not Supported 01:29:28.780 Per-NS Atomic Units: No 01:29:28.780 Maximum Single Source Range Length: 128 01:29:28.780 Maximum Copy Length: 128 01:29:28.780 Maximum Source Range Count: 128 01:29:28.780 NGUID/EUI64 Never Reused: No 01:29:28.780 Namespace Write Protected: No 01:29:28.780 Number of LBA Formats: 8 01:29:28.780 Current LBA Format: LBA Format #07 01:29:28.780 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:28.780 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:28.780 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:28.780 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:28.780 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:28.780 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:28.780 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:28.780 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:28.780 01:29:28.780 NVM Specific Namespace Data 01:29:28.780 =========================== 01:29:28.780 Logical Block Storage Tag Mask: 0 01:29:28.780 Protection Information Capabilities: 01:29:28.780 16b Guard Protection Information Storage Tag Support: No 01:29:28.780 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:28.780 Storage Tag Check Read Support: No 01:29:28.780 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:28.780 05:24:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:29:28.781 05:24:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 01:29:29.040 ===================================================== 01:29:29.040 NVMe Controller at 0000:00:11.0 [1b36:0010] 01:29:29.040 ===================================================== 01:29:29.040 Controller Capabilities/Features 01:29:29.040 ================================ 01:29:29.040 Vendor ID: 1b36 01:29:29.040 Subsystem Vendor ID: 1af4 01:29:29.040 Serial Number: 12341 01:29:29.040 Model Number: QEMU NVMe Ctrl 01:29:29.040 Firmware Version: 8.0.0 01:29:29.040 Recommended Arb Burst: 6 01:29:29.040 IEEE OUI Identifier: 00 54 52 01:29:29.040 Multi-path I/O 01:29:29.040 May have multiple subsystem ports: No 01:29:29.040 May have multiple 
controllers: No 01:29:29.040 Associated with SR-IOV VF: No 01:29:29.040 Max Data Transfer Size: 524288 01:29:29.040 Max Number of Namespaces: 256 01:29:29.040 Max Number of I/O Queues: 64 01:29:29.040 NVMe Specification Version (VS): 1.4 01:29:29.040 NVMe Specification Version (Identify): 1.4 01:29:29.040 Maximum Queue Entries: 2048 01:29:29.040 Contiguous Queues Required: Yes 01:29:29.040 Arbitration Mechanisms Supported 01:29:29.040 Weighted Round Robin: Not Supported 01:29:29.040 Vendor Specific: Not Supported 01:29:29.040 Reset Timeout: 7500 ms 01:29:29.040 Doorbell Stride: 4 bytes 01:29:29.040 NVM Subsystem Reset: Not Supported 01:29:29.040 Command Sets Supported 01:29:29.040 NVM Command Set: Supported 01:29:29.040 Boot Partition: Not Supported 01:29:29.040 Memory Page Size Minimum: 4096 bytes 01:29:29.040 Memory Page Size Maximum: 65536 bytes 01:29:29.040 Persistent Memory Region: Not Supported 01:29:29.040 Optional Asynchronous Events Supported 01:29:29.040 Namespace Attribute Notices: Supported 01:29:29.040 Firmware Activation Notices: Not Supported 01:29:29.040 ANA Change Notices: Not Supported 01:29:29.040 PLE Aggregate Log Change Notices: Not Supported 01:29:29.040 LBA Status Info Alert Notices: Not Supported 01:29:29.040 EGE Aggregate Log Change Notices: Not Supported 01:29:29.040 Normal NVM Subsystem Shutdown event: Not Supported 01:29:29.040 Zone Descriptor Change Notices: Not Supported 01:29:29.040 Discovery Log Change Notices: Not Supported 01:29:29.040 Controller Attributes 01:29:29.040 128-bit Host Identifier: Not Supported 01:29:29.040 Non-Operational Permissive Mode: Not Supported 01:29:29.040 NVM Sets: Not Supported 01:29:29.040 Read Recovery Levels: Not Supported 01:29:29.040 Endurance Groups: Not Supported 01:29:29.040 Predictable Latency Mode: Not Supported 01:29:29.040 Traffic Based Keep ALive: Not Supported 01:29:29.040 Namespace Granularity: Not Supported 01:29:29.040 SQ Associations: Not Supported 01:29:29.040 UUID List: Not Supported 01:29:29.040 Multi-Domain Subsystem: Not Supported 01:29:29.040 Fixed Capacity Management: Not Supported 01:29:29.040 Variable Capacity Management: Not Supported 01:29:29.040 Delete Endurance Group: Not Supported 01:29:29.040 Delete NVM Set: Not Supported 01:29:29.040 Extended LBA Formats Supported: Supported 01:29:29.040 Flexible Data Placement Supported: Not Supported 01:29:29.040 01:29:29.040 Controller Memory Buffer Support 01:29:29.040 ================================ 01:29:29.040 Supported: No 01:29:29.040 01:29:29.040 Persistent Memory Region Support 01:29:29.040 ================================ 01:29:29.040 Supported: No 01:29:29.040 01:29:29.040 Admin Command Set Attributes 01:29:29.040 ============================ 01:29:29.040 Security Send/Receive: Not Supported 01:29:29.040 Format NVM: Supported 01:29:29.040 Firmware Activate/Download: Not Supported 01:29:29.040 Namespace Management: Supported 01:29:29.040 Device Self-Test: Not Supported 01:29:29.040 Directives: Supported 01:29:29.040 NVMe-MI: Not Supported 01:29:29.040 Virtualization Management: Not Supported 01:29:29.040 Doorbell Buffer Config: Supported 01:29:29.040 Get LBA Status Capability: Not Supported 01:29:29.040 Command & Feature Lockdown Capability: Not Supported 01:29:29.040 Abort Command Limit: 4 01:29:29.040 Async Event Request Limit: 4 01:29:29.040 Number of Firmware Slots: N/A 01:29:29.040 Firmware Slot 1 Read-Only: N/A 01:29:29.040 Firmware Activation Without Reset: N/A 01:29:29.040 Multiple Update Detection Support: N/A 01:29:29.040 Firmware Update 
Granularity: No Information Provided 01:29:29.040 Per-Namespace SMART Log: Yes 01:29:29.040 Asymmetric Namespace Access Log Page: Not Supported 01:29:29.040 Subsystem NQN: nqn.2019-08.org.qemu:12341 01:29:29.040 Command Effects Log Page: Supported 01:29:29.040 Get Log Page Extended Data: Supported 01:29:29.040 Telemetry Log Pages: Not Supported 01:29:29.040 Persistent Event Log Pages: Not Supported 01:29:29.040 Supported Log Pages Log Page: May Support 01:29:29.040 Commands Supported & Effects Log Page: Not Supported 01:29:29.040 Feature Identifiers & Effects Log Page:May Support 01:29:29.040 NVMe-MI Commands & Effects Log Page: May Support 01:29:29.040 Data Area 4 for Telemetry Log: Not Supported 01:29:29.040 Error Log Page Entries Supported: 1 01:29:29.040 Keep Alive: Not Supported 01:29:29.040 01:29:29.040 NVM Command Set Attributes 01:29:29.040 ========================== 01:29:29.040 Submission Queue Entry Size 01:29:29.041 Max: 64 01:29:29.041 Min: 64 01:29:29.041 Completion Queue Entry Size 01:29:29.041 Max: 16 01:29:29.041 Min: 16 01:29:29.041 Number of Namespaces: 256 01:29:29.041 Compare Command: Supported 01:29:29.041 Write Uncorrectable Command: Not Supported 01:29:29.041 Dataset Management Command: Supported 01:29:29.041 Write Zeroes Command: Supported 01:29:29.041 Set Features Save Field: Supported 01:29:29.041 Reservations: Not Supported 01:29:29.041 Timestamp: Supported 01:29:29.041 Copy: Supported 01:29:29.041 Volatile Write Cache: Present 01:29:29.041 Atomic Write Unit (Normal): 1 01:29:29.041 Atomic Write Unit (PFail): 1 01:29:29.041 Atomic Compare & Write Unit: 1 01:29:29.041 Fused Compare & Write: Not Supported 01:29:29.041 Scatter-Gather List 01:29:29.041 SGL Command Set: Supported 01:29:29.041 SGL Keyed: Not Supported 01:29:29.041 SGL Bit Bucket Descriptor: Not Supported 01:29:29.041 SGL Metadata Pointer: Not Supported 01:29:29.041 Oversized SGL: Not Supported 01:29:29.041 SGL Metadata Address: Not Supported 01:29:29.041 SGL Offset: Not Supported 01:29:29.041 Transport SGL Data Block: Not Supported 01:29:29.041 Replay Protected Memory Block: Not Supported 01:29:29.041 01:29:29.041 Firmware Slot Information 01:29:29.041 ========================= 01:29:29.041 Active slot: 1 01:29:29.041 Slot 1 Firmware Revision: 1.0 01:29:29.041 01:29:29.041 01:29:29.041 Commands Supported and Effects 01:29:29.041 ============================== 01:29:29.041 Admin Commands 01:29:29.041 -------------- 01:29:29.041 Delete I/O Submission Queue (00h): Supported 01:29:29.041 Create I/O Submission Queue (01h): Supported 01:29:29.041 Get Log Page (02h): Supported 01:29:29.041 Delete I/O Completion Queue (04h): Supported 01:29:29.041 Create I/O Completion Queue (05h): Supported 01:29:29.041 Identify (06h): Supported 01:29:29.041 Abort (08h): Supported 01:29:29.041 Set Features (09h): Supported 01:29:29.041 Get Features (0Ah): Supported 01:29:29.041 Asynchronous Event Request (0Ch): Supported 01:29:29.041 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:29.041 Directive Send (19h): Supported 01:29:29.041 Directive Receive (1Ah): Supported 01:29:29.041 Virtualization Management (1Ch): Supported 01:29:29.041 Doorbell Buffer Config (7Ch): Supported 01:29:29.041 Format NVM (80h): Supported LBA-Change 01:29:29.041 I/O Commands 01:29:29.041 ------------ 01:29:29.041 Flush (00h): Supported LBA-Change 01:29:29.041 Write (01h): Supported LBA-Change 01:29:29.041 Read (02h): Supported 01:29:29.041 Compare (05h): Supported 01:29:29.041 Write Zeroes (08h): Supported LBA-Change 01:29:29.041 
Dataset Management (09h): Supported LBA-Change 01:29:29.041 Unknown (0Ch): Supported 01:29:29.041 Unknown (12h): Supported 01:29:29.041 Copy (19h): Supported LBA-Change 01:29:29.041 Unknown (1Dh): Supported LBA-Change 01:29:29.041 01:29:29.041 Error Log 01:29:29.041 ========= 01:29:29.041 01:29:29.041 Arbitration 01:29:29.041 =========== 01:29:29.041 Arbitration Burst: no limit 01:29:29.041 01:29:29.041 Power Management 01:29:29.041 ================ 01:29:29.041 Number of Power States: 1 01:29:29.041 Current Power State: Power State #0 01:29:29.041 Power State #0: 01:29:29.041 Max Power: 25.00 W 01:29:29.041 Non-Operational State: Operational 01:29:29.041 Entry Latency: 16 microseconds 01:29:29.041 Exit Latency: 4 microseconds 01:29:29.041 Relative Read Throughput: 0 01:29:29.041 Relative Read Latency: 0 01:29:29.041 Relative Write Throughput: 0 01:29:29.041 Relative Write Latency: 0 01:29:29.300 Idle Power: Not Reported 01:29:29.300 Active Power: Not Reported 01:29:29.300 Non-Operational Permissive Mode: Not Supported 01:29:29.300 01:29:29.300 Health Information 01:29:29.300 ================== 01:29:29.300 Critical Warnings: 01:29:29.300 Available Spare Space: OK 01:29:29.300 Temperature: OK 01:29:29.300 Device Reliability: OK 01:29:29.300 Read Only: No 01:29:29.300 Volatile Memory Backup: OK 01:29:29.300 Current Temperature: 323 Kelvin (50 Celsius) 01:29:29.300 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:29.300 Available Spare: 0% 01:29:29.300 Available Spare Threshold: 0% 01:29:29.300 Life Percentage Used: 0% 01:29:29.300 Data Units Read: 1098 01:29:29.300 Data Units Written: 958 01:29:29.300 Host Read Commands: 46356 01:29:29.300 Host Write Commands: 45052 01:29:29.300 Controller Busy Time: 0 minutes 01:29:29.300 Power Cycles: 0 01:29:29.300 Power On Hours: 0 hours 01:29:29.300 Unsafe Shutdowns: 0 01:29:29.300 Unrecoverable Media Errors: 0 01:29:29.300 Lifetime Error Log Entries: 0 01:29:29.300 Warning Temperature Time: 0 minutes 01:29:29.300 Critical Temperature Time: 0 minutes 01:29:29.300 01:29:29.300 Number of Queues 01:29:29.300 ================ 01:29:29.300 Number of I/O Submission Queues: 64 01:29:29.300 Number of I/O Completion Queues: 64 01:29:29.300 01:29:29.300 ZNS Specific Controller Data 01:29:29.300 ============================ 01:29:29.300 Zone Append Size Limit: 0 01:29:29.300 01:29:29.300 01:29:29.300 Active Namespaces 01:29:29.300 ================= 01:29:29.300 Namespace ID:1 01:29:29.300 Error Recovery Timeout: Unlimited 01:29:29.300 Command Set Identifier: NVM (00h) 01:29:29.300 Deallocate: Supported 01:29:29.300 Deallocated/Unwritten Error: Supported 01:29:29.300 Deallocated Read Value: All 0x00 01:29:29.300 Deallocate in Write Zeroes: Not Supported 01:29:29.300 Deallocated Guard Field: 0xFFFF 01:29:29.300 Flush: Supported 01:29:29.300 Reservation: Not Supported 01:29:29.300 Namespace Sharing Capabilities: Private 01:29:29.300 Size (in LBAs): 1310720 (5GiB) 01:29:29.300 Capacity (in LBAs): 1310720 (5GiB) 01:29:29.300 Utilization (in LBAs): 1310720 (5GiB) 01:29:29.300 Thin Provisioning: Not Supported 01:29:29.300 Per-NS Atomic Units: No 01:29:29.300 Maximum Single Source Range Length: 128 01:29:29.300 Maximum Copy Length: 128 01:29:29.300 Maximum Source Range Count: 128 01:29:29.300 NGUID/EUI64 Never Reused: No 01:29:29.300 Namespace Write Protected: No 01:29:29.300 Number of LBA Formats: 8 01:29:29.300 Current LBA Format: LBA Format #04 01:29:29.300 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:29.300 LBA Format #01: Data Size: 512 Metadata Size: 8 
01:29:29.300 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:29.300 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:29.300 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:29.300 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:29.300 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:29.300 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:29.300 01:29:29.300 NVM Specific Namespace Data 01:29:29.301 =========================== 01:29:29.301 Logical Block Storage Tag Mask: 0 01:29:29.301 Protection Information Capabilities: 01:29:29.301 16b Guard Protection Information Storage Tag Support: No 01:29:29.301 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:29.301 Storage Tag Check Read Support: No 01:29:29.301 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.301 05:24:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:29:29.301 05:24:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 01:29:29.579 ===================================================== 01:29:29.579 NVMe Controller at 0000:00:12.0 [1b36:0010] 01:29:29.579 ===================================================== 01:29:29.579 Controller Capabilities/Features 01:29:29.579 ================================ 01:29:29.579 Vendor ID: 1b36 01:29:29.579 Subsystem Vendor ID: 1af4 01:29:29.579 Serial Number: 12342 01:29:29.579 Model Number: QEMU NVMe Ctrl 01:29:29.579 Firmware Version: 8.0.0 01:29:29.579 Recommended Arb Burst: 6 01:29:29.579 IEEE OUI Identifier: 00 54 52 01:29:29.580 Multi-path I/O 01:29:29.580 May have multiple subsystem ports: No 01:29:29.580 May have multiple controllers: No 01:29:29.580 Associated with SR-IOV VF: No 01:29:29.580 Max Data Transfer Size: 524288 01:29:29.580 Max Number of Namespaces: 256 01:29:29.580 Max Number of I/O Queues: 64 01:29:29.580 NVMe Specification Version (VS): 1.4 01:29:29.580 NVMe Specification Version (Identify): 1.4 01:29:29.580 Maximum Queue Entries: 2048 01:29:29.580 Contiguous Queues Required: Yes 01:29:29.580 Arbitration Mechanisms Supported 01:29:29.580 Weighted Round Robin: Not Supported 01:29:29.580 Vendor Specific: Not Supported 01:29:29.580 Reset Timeout: 7500 ms 01:29:29.580 Doorbell Stride: 4 bytes 01:29:29.580 NVM Subsystem Reset: Not Supported 01:29:29.580 Command Sets Supported 01:29:29.580 NVM Command Set: Supported 01:29:29.580 Boot Partition: Not Supported 01:29:29.580 Memory Page Size Minimum: 4096 bytes 01:29:29.580 Memory Page Size Maximum: 65536 bytes 01:29:29.580 Persistent Memory Region: Not Supported 01:29:29.580 Optional Asynchronous Events Supported 01:29:29.580 Namespace Attribute Notices: Supported 01:29:29.580 Firmware 
Activation Notices: Not Supported 01:29:29.580 ANA Change Notices: Not Supported 01:29:29.580 PLE Aggregate Log Change Notices: Not Supported 01:29:29.580 LBA Status Info Alert Notices: Not Supported 01:29:29.580 EGE Aggregate Log Change Notices: Not Supported 01:29:29.580 Normal NVM Subsystem Shutdown event: Not Supported 01:29:29.580 Zone Descriptor Change Notices: Not Supported 01:29:29.580 Discovery Log Change Notices: Not Supported 01:29:29.580 Controller Attributes 01:29:29.580 128-bit Host Identifier: Not Supported 01:29:29.580 Non-Operational Permissive Mode: Not Supported 01:29:29.580 NVM Sets: Not Supported 01:29:29.580 Read Recovery Levels: Not Supported 01:29:29.580 Endurance Groups: Not Supported 01:29:29.580 Predictable Latency Mode: Not Supported 01:29:29.580 Traffic Based Keep ALive: Not Supported 01:29:29.580 Namespace Granularity: Not Supported 01:29:29.580 SQ Associations: Not Supported 01:29:29.580 UUID List: Not Supported 01:29:29.580 Multi-Domain Subsystem: Not Supported 01:29:29.580 Fixed Capacity Management: Not Supported 01:29:29.580 Variable Capacity Management: Not Supported 01:29:29.580 Delete Endurance Group: Not Supported 01:29:29.580 Delete NVM Set: Not Supported 01:29:29.580 Extended LBA Formats Supported: Supported 01:29:29.580 Flexible Data Placement Supported: Not Supported 01:29:29.580 01:29:29.580 Controller Memory Buffer Support 01:29:29.580 ================================ 01:29:29.580 Supported: No 01:29:29.580 01:29:29.580 Persistent Memory Region Support 01:29:29.580 ================================ 01:29:29.580 Supported: No 01:29:29.580 01:29:29.580 Admin Command Set Attributes 01:29:29.580 ============================ 01:29:29.580 Security Send/Receive: Not Supported 01:29:29.580 Format NVM: Supported 01:29:29.580 Firmware Activate/Download: Not Supported 01:29:29.580 Namespace Management: Supported 01:29:29.580 Device Self-Test: Not Supported 01:29:29.580 Directives: Supported 01:29:29.580 NVMe-MI: Not Supported 01:29:29.580 Virtualization Management: Not Supported 01:29:29.580 Doorbell Buffer Config: Supported 01:29:29.580 Get LBA Status Capability: Not Supported 01:29:29.580 Command & Feature Lockdown Capability: Not Supported 01:29:29.580 Abort Command Limit: 4 01:29:29.580 Async Event Request Limit: 4 01:29:29.580 Number of Firmware Slots: N/A 01:29:29.580 Firmware Slot 1 Read-Only: N/A 01:29:29.580 Firmware Activation Without Reset: N/A 01:29:29.580 Multiple Update Detection Support: N/A 01:29:29.580 Firmware Update Granularity: No Information Provided 01:29:29.580 Per-Namespace SMART Log: Yes 01:29:29.580 Asymmetric Namespace Access Log Page: Not Supported 01:29:29.580 Subsystem NQN: nqn.2019-08.org.qemu:12342 01:29:29.580 Command Effects Log Page: Supported 01:29:29.580 Get Log Page Extended Data: Supported 01:29:29.580 Telemetry Log Pages: Not Supported 01:29:29.580 Persistent Event Log Pages: Not Supported 01:29:29.580 Supported Log Pages Log Page: May Support 01:29:29.580 Commands Supported & Effects Log Page: Not Supported 01:29:29.580 Feature Identifiers & Effects Log Page:May Support 01:29:29.580 NVMe-MI Commands & Effects Log Page: May Support 01:29:29.580 Data Area 4 for Telemetry Log: Not Supported 01:29:29.580 Error Log Page Entries Supported: 1 01:29:29.580 Keep Alive: Not Supported 01:29:29.580 01:29:29.580 NVM Command Set Attributes 01:29:29.580 ========================== 01:29:29.580 Submission Queue Entry Size 01:29:29.580 Max: 64 01:29:29.580 Min: 64 01:29:29.580 Completion Queue Entry Size 01:29:29.580 Max: 16 
01:29:29.580 Min: 16 01:29:29.580 Number of Namespaces: 256 01:29:29.580 Compare Command: Supported 01:29:29.580 Write Uncorrectable Command: Not Supported 01:29:29.580 Dataset Management Command: Supported 01:29:29.580 Write Zeroes Command: Supported 01:29:29.580 Set Features Save Field: Supported 01:29:29.580 Reservations: Not Supported 01:29:29.580 Timestamp: Supported 01:29:29.580 Copy: Supported 01:29:29.580 Volatile Write Cache: Present 01:29:29.580 Atomic Write Unit (Normal): 1 01:29:29.580 Atomic Write Unit (PFail): 1 01:29:29.580 Atomic Compare & Write Unit: 1 01:29:29.580 Fused Compare & Write: Not Supported 01:29:29.580 Scatter-Gather List 01:29:29.580 SGL Command Set: Supported 01:29:29.580 SGL Keyed: Not Supported 01:29:29.580 SGL Bit Bucket Descriptor: Not Supported 01:29:29.580 SGL Metadata Pointer: Not Supported 01:29:29.580 Oversized SGL: Not Supported 01:29:29.580 SGL Metadata Address: Not Supported 01:29:29.580 SGL Offset: Not Supported 01:29:29.580 Transport SGL Data Block: Not Supported 01:29:29.580 Replay Protected Memory Block: Not Supported 01:29:29.580 01:29:29.580 Firmware Slot Information 01:29:29.580 ========================= 01:29:29.580 Active slot: 1 01:29:29.580 Slot 1 Firmware Revision: 1.0 01:29:29.580 01:29:29.580 01:29:29.580 Commands Supported and Effects 01:29:29.580 ============================== 01:29:29.580 Admin Commands 01:29:29.580 -------------- 01:29:29.580 Delete I/O Submission Queue (00h): Supported 01:29:29.580 Create I/O Submission Queue (01h): Supported 01:29:29.580 Get Log Page (02h): Supported 01:29:29.580 Delete I/O Completion Queue (04h): Supported 01:29:29.580 Create I/O Completion Queue (05h): Supported 01:29:29.580 Identify (06h): Supported 01:29:29.580 Abort (08h): Supported 01:29:29.580 Set Features (09h): Supported 01:29:29.580 Get Features (0Ah): Supported 01:29:29.580 Asynchronous Event Request (0Ch): Supported 01:29:29.580 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:29.580 Directive Send (19h): Supported 01:29:29.580 Directive Receive (1Ah): Supported 01:29:29.580 Virtualization Management (1Ch): Supported 01:29:29.580 Doorbell Buffer Config (7Ch): Supported 01:29:29.580 Format NVM (80h): Supported LBA-Change 01:29:29.580 I/O Commands 01:29:29.580 ------------ 01:29:29.580 Flush (00h): Supported LBA-Change 01:29:29.580 Write (01h): Supported LBA-Change 01:29:29.580 Read (02h): Supported 01:29:29.580 Compare (05h): Supported 01:29:29.580 Write Zeroes (08h): Supported LBA-Change 01:29:29.580 Dataset Management (09h): Supported LBA-Change 01:29:29.580 Unknown (0Ch): Supported 01:29:29.580 Unknown (12h): Supported 01:29:29.580 Copy (19h): Supported LBA-Change 01:29:29.580 Unknown (1Dh): Supported LBA-Change 01:29:29.580 01:29:29.580 Error Log 01:29:29.580 ========= 01:29:29.580 01:29:29.580 Arbitration 01:29:29.580 =========== 01:29:29.580 Arbitration Burst: no limit 01:29:29.580 01:29:29.580 Power Management 01:29:29.580 ================ 01:29:29.580 Number of Power States: 1 01:29:29.580 Current Power State: Power State #0 01:29:29.580 Power State #0: 01:29:29.580 Max Power: 25.00 W 01:29:29.580 Non-Operational State: Operational 01:29:29.580 Entry Latency: 16 microseconds 01:29:29.580 Exit Latency: 4 microseconds 01:29:29.580 Relative Read Throughput: 0 01:29:29.580 Relative Read Latency: 0 01:29:29.580 Relative Write Throughput: 0 01:29:29.580 Relative Write Latency: 0 01:29:29.580 Idle Power: Not Reported 01:29:29.580 Active Power: Not Reported 01:29:29.580 Non-Operational Permissive Mode: Not Supported 
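The Health Information blocks in these dumps report temperatures as "323 Kelvin (50 Celsius)" and "343 Kelvin (70 Celsius)", i.e. Celsius = Kelvin - 273 with integer arithmetic. A minimal sketch of how a run like this drives spdk_nvme_identify across the four controllers dumped in this log and pulls those temperature fields back out; the bdfs list and the grep pattern are illustrative assumptions mirroring the traddr values seen above, not taken verbatim from the test scripts:

bdfs=("0000:00:10.0" "0000:00:11.0" "0000:00:12.0" "0000:00:13.0")   # assumed; matches the traddr values in this log
for bdf in "${bdfs[@]}"; do
    # Same binary and arguments as the nvme/nvme.sh@16 invocations traced above; -i 0 is the shared-memory id.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep -E 'Current Temperature|Temperature Threshold'
done
kelvin=323; echo "$((kelvin - 273)) Celsius"   # prints "50 Celsius", matching the tool's own conversion

The loop shape follows the `for bdf in "${bdfs[@]}"` trace recorded at nvme/nvme.sh@15 elsewhere in this log.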
01:29:29.580 01:29:29.580 Health Information 01:29:29.580 ================== 01:29:29.580 Critical Warnings: 01:29:29.580 Available Spare Space: OK 01:29:29.580 Temperature: OK 01:29:29.580 Device Reliability: OK 01:29:29.580 Read Only: No 01:29:29.581 Volatile Memory Backup: OK 01:29:29.581 Current Temperature: 323 Kelvin (50 Celsius) 01:29:29.581 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:29.581 Available Spare: 0% 01:29:29.581 Available Spare Threshold: 0% 01:29:29.581 Life Percentage Used: 0% 01:29:29.581 Data Units Read: 2238 01:29:29.581 Data Units Written: 2025 01:29:29.581 Host Read Commands: 96804 01:29:29.581 Host Write Commands: 95073 01:29:29.581 Controller Busy Time: 0 minutes 01:29:29.581 Power Cycles: 0 01:29:29.581 Power On Hours: 0 hours 01:29:29.581 Unsafe Shutdowns: 0 01:29:29.581 Unrecoverable Media Errors: 0 01:29:29.581 Lifetime Error Log Entries: 0 01:29:29.581 Warning Temperature Time: 0 minutes 01:29:29.581 Critical Temperature Time: 0 minutes 01:29:29.581 01:29:29.581 Number of Queues 01:29:29.581 ================ 01:29:29.581 Number of I/O Submission Queues: 64 01:29:29.581 Number of I/O Completion Queues: 64 01:29:29.581 01:29:29.581 ZNS Specific Controller Data 01:29:29.581 ============================ 01:29:29.581 Zone Append Size Limit: 0 01:29:29.581 01:29:29.581 01:29:29.581 Active Namespaces 01:29:29.581 ================= 01:29:29.581 Namespace ID:1 01:29:29.581 Error Recovery Timeout: Unlimited 01:29:29.581 Command Set Identifier: NVM (00h) 01:29:29.581 Deallocate: Supported 01:29:29.581 Deallocated/Unwritten Error: Supported 01:29:29.581 Deallocated Read Value: All 0x00 01:29:29.581 Deallocate in Write Zeroes: Not Supported 01:29:29.581 Deallocated Guard Field: 0xFFFF 01:29:29.581 Flush: Supported 01:29:29.581 Reservation: Not Supported 01:29:29.581 Namespace Sharing Capabilities: Private 01:29:29.581 Size (in LBAs): 1048576 (4GiB) 01:29:29.581 Capacity (in LBAs): 1048576 (4GiB) 01:29:29.581 Utilization (in LBAs): 1048576 (4GiB) 01:29:29.581 Thin Provisioning: Not Supported 01:29:29.581 Per-NS Atomic Units: No 01:29:29.581 Maximum Single Source Range Length: 128 01:29:29.581 Maximum Copy Length: 128 01:29:29.581 Maximum Source Range Count: 128 01:29:29.581 NGUID/EUI64 Never Reused: No 01:29:29.581 Namespace Write Protected: No 01:29:29.581 Number of LBA Formats: 8 01:29:29.581 Current LBA Format: LBA Format #04 01:29:29.581 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:29.581 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:29.581 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:29.581 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:29.581 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:29.581 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:29.581 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:29.581 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:29.581 01:29:29.581 NVM Specific Namespace Data 01:29:29.581 =========================== 01:29:29.581 Logical Block Storage Tag Mask: 0 01:29:29.581 Protection Information Capabilities: 01:29:29.581 16b Guard Protection Information Storage Tag Support: No 01:29:29.581 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:29.581 Storage Tag Check Read Support: No 01:29:29.581 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Namespace ID:2 01:29:29.581 Error Recovery Timeout: Unlimited 01:29:29.581 Command Set Identifier: NVM (00h) 01:29:29.581 Deallocate: Supported 01:29:29.581 Deallocated/Unwritten Error: Supported 01:29:29.581 Deallocated Read Value: All 0x00 01:29:29.581 Deallocate in Write Zeroes: Not Supported 01:29:29.581 Deallocated Guard Field: 0xFFFF 01:29:29.581 Flush: Supported 01:29:29.581 Reservation: Not Supported 01:29:29.581 Namespace Sharing Capabilities: Private 01:29:29.581 Size (in LBAs): 1048576 (4GiB) 01:29:29.581 Capacity (in LBAs): 1048576 (4GiB) 01:29:29.581 Utilization (in LBAs): 1048576 (4GiB) 01:29:29.581 Thin Provisioning: Not Supported 01:29:29.581 Per-NS Atomic Units: No 01:29:29.581 Maximum Single Source Range Length: 128 01:29:29.581 Maximum Copy Length: 128 01:29:29.581 Maximum Source Range Count: 128 01:29:29.581 NGUID/EUI64 Never Reused: No 01:29:29.581 Namespace Write Protected: No 01:29:29.581 Number of LBA Formats: 8 01:29:29.581 Current LBA Format: LBA Format #04 01:29:29.581 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:29.581 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:29.581 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:29.581 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:29.581 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:29.581 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:29.581 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:29.581 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:29.581 01:29:29.581 NVM Specific Namespace Data 01:29:29.581 =========================== 01:29:29.581 Logical Block Storage Tag Mask: 0 01:29:29.581 Protection Information Capabilities: 01:29:29.581 16b Guard Protection Information Storage Tag Support: No 01:29:29.581 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:29.581 Storage Tag Check Read Support: No 01:29:29.581 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Namespace ID:3 01:29:29.581 Error Recovery Timeout: Unlimited 01:29:29.581 Command Set Identifier: NVM (00h) 01:29:29.581 Deallocate: Supported 01:29:29.581 Deallocated/Unwritten Error: Supported 01:29:29.581 Deallocated Read 
Value: All 0x00 01:29:29.581 Deallocate in Write Zeroes: Not Supported 01:29:29.581 Deallocated Guard Field: 0xFFFF 01:29:29.581 Flush: Supported 01:29:29.581 Reservation: Not Supported 01:29:29.581 Namespace Sharing Capabilities: Private 01:29:29.581 Size (in LBAs): 1048576 (4GiB) 01:29:29.581 Capacity (in LBAs): 1048576 (4GiB) 01:29:29.581 Utilization (in LBAs): 1048576 (4GiB) 01:29:29.581 Thin Provisioning: Not Supported 01:29:29.581 Per-NS Atomic Units: No 01:29:29.581 Maximum Single Source Range Length: 128 01:29:29.581 Maximum Copy Length: 128 01:29:29.581 Maximum Source Range Count: 128 01:29:29.581 NGUID/EUI64 Never Reused: No 01:29:29.581 Namespace Write Protected: No 01:29:29.581 Number of LBA Formats: 8 01:29:29.581 Current LBA Format: LBA Format #04 01:29:29.581 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:29.581 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:29.581 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:29.581 LBA Format #03: Data Size: 512 Metadata Size: 64 01:29:29.581 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:29.581 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:29.581 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:29.581 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:29.581 01:29:29.581 NVM Specific Namespace Data 01:29:29.581 =========================== 01:29:29.581 Logical Block Storage Tag Mask: 0 01:29:29.581 Protection Information Capabilities: 01:29:29.581 16b Guard Protection Information Storage Tag Support: No 01:29:29.581 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:29.581 Storage Tag Check Read Support: No 01:29:29.581 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:29.581 05:24:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 01:29:29.581 05:24:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 01:29:29.841 ===================================================== 01:29:29.841 NVMe Controller at 0000:00:13.0 [1b36:0010] 01:29:29.841 ===================================================== 01:29:29.841 Controller Capabilities/Features 01:29:29.841 ================================ 01:29:29.841 Vendor ID: 1b36 01:29:29.841 Subsystem Vendor ID: 1af4 01:29:29.841 Serial Number: 12343 01:29:29.841 Model Number: QEMU NVMe Ctrl 01:29:29.841 Firmware Version: 8.0.0 01:29:29.841 Recommended Arb Burst: 6 01:29:29.841 IEEE OUI Identifier: 00 54 52 01:29:29.841 Multi-path I/O 01:29:29.841 May have multiple subsystem ports: No 01:29:29.841 May have multiple controllers: Yes 01:29:29.841 Associated with SR-IOV VF: No 01:29:29.841 Max Data Transfer Size: 524288 01:29:29.841 Max Number of Namespaces: 
256 01:29:29.841 Max Number of I/O Queues: 64 01:29:29.841 NVMe Specification Version (VS): 1.4 01:29:29.841 NVMe Specification Version (Identify): 1.4 01:29:29.841 Maximum Queue Entries: 2048 01:29:29.841 Contiguous Queues Required: Yes 01:29:29.841 Arbitration Mechanisms Supported 01:29:29.841 Weighted Round Robin: Not Supported 01:29:29.841 Vendor Specific: Not Supported 01:29:29.841 Reset Timeout: 7500 ms 01:29:29.841 Doorbell Stride: 4 bytes 01:29:29.841 NVM Subsystem Reset: Not Supported 01:29:29.841 Command Sets Supported 01:29:29.841 NVM Command Set: Supported 01:29:29.841 Boot Partition: Not Supported 01:29:29.841 Memory Page Size Minimum: 4096 bytes 01:29:29.841 Memory Page Size Maximum: 65536 bytes 01:29:29.841 Persistent Memory Region: Not Supported 01:29:29.841 Optional Asynchronous Events Supported 01:29:29.841 Namespace Attribute Notices: Supported 01:29:29.841 Firmware Activation Notices: Not Supported 01:29:29.841 ANA Change Notices: Not Supported 01:29:29.841 PLE Aggregate Log Change Notices: Not Supported 01:29:29.841 LBA Status Info Alert Notices: Not Supported 01:29:29.841 EGE Aggregate Log Change Notices: Not Supported 01:29:29.841 Normal NVM Subsystem Shutdown event: Not Supported 01:29:29.841 Zone Descriptor Change Notices: Not Supported 01:29:29.841 Discovery Log Change Notices: Not Supported 01:29:29.841 Controller Attributes 01:29:29.841 128-bit Host Identifier: Not Supported 01:29:29.841 Non-Operational Permissive Mode: Not Supported 01:29:29.841 NVM Sets: Not Supported 01:29:29.841 Read Recovery Levels: Not Supported 01:29:29.841 Endurance Groups: Supported 01:29:29.841 Predictable Latency Mode: Not Supported 01:29:29.841 Traffic Based Keep Alive: Not Supported 01:29:29.841 Namespace Granularity: Not Supported 01:29:29.841 SQ Associations: Not Supported 01:29:29.841 UUID List: Not Supported 01:29:29.842 Multi-Domain Subsystem: Not Supported 01:29:29.842 Fixed Capacity Management: Not Supported 01:29:29.842 Variable Capacity Management: Not Supported 01:29:29.842 Delete Endurance Group: Not Supported 01:29:29.842 Delete NVM Set: Not Supported 01:29:29.842 Extended LBA Formats Supported: Supported 01:29:29.842 Flexible Data Placement Supported: Supported 01:29:29.842 01:29:29.842 Controller Memory Buffer Support 01:29:29.842 ================================ 01:29:29.842 Supported: No 01:29:29.842 01:29:29.842 Persistent Memory Region Support 01:29:29.842 ================================ 01:29:29.842 Supported: No 01:29:29.842 01:29:29.842 Admin Command Set Attributes 01:29:29.842 ============================ 01:29:29.842 Security Send/Receive: Not Supported 01:29:29.842 Format NVM: Supported 01:29:29.842 Firmware Activate/Download: Not Supported 01:29:29.842 Namespace Management: Supported 01:29:29.842 Device Self-Test: Not Supported 01:29:29.842 Directives: Supported 01:29:29.842 NVMe-MI: Not Supported 01:29:29.842 Virtualization Management: Not Supported 01:29:29.842 Doorbell Buffer Config: Supported 01:29:29.842 Get LBA Status Capability: Not Supported 01:29:29.842 Command & Feature Lockdown Capability: Not Supported 01:29:29.842 Abort Command Limit: 4 01:29:29.842 Async Event Request Limit: 4 01:29:29.842 Number of Firmware Slots: N/A 01:29:29.842 Firmware Slot 1 Read-Only: N/A 01:29:29.842 Firmware Activation Without Reset: N/A 01:29:29.842 Multiple Update Detection Support: N/A 01:29:29.842 Firmware Update Granularity: No Information Provided 01:29:29.842 Per-Namespace SMART Log: Yes 01:29:29.842 Asymmetric Namespace Access Log Page: Not Supported
01:29:29.842 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 01:29:29.842 Command Effects Log Page: Supported 01:29:29.842 Get Log Page Extended Data: Supported 01:29:29.842 Telemetry Log Pages: Not Supported 01:29:29.842 Persistent Event Log Pages: Not Supported 01:29:29.842 Supported Log Pages Log Page: May Support 01:29:29.842 Commands Supported & Effects Log Page: Not Supported 01:29:29.842 Feature Identifiers & Effects Log Page: May Support 01:29:29.842 NVMe-MI Commands & Effects Log Page: May Support 01:29:29.842 Data Area 4 for Telemetry Log: Not Supported 01:29:29.842 Error Log Page Entries Supported: 1 01:29:29.842 Keep Alive: Not Supported 01:29:29.842 01:29:29.842 NVM Command Set Attributes 01:29:29.842 ========================== 01:29:29.842 Submission Queue Entry Size 01:29:29.842 Max: 64 01:29:29.842 Min: 64 01:29:29.842 Completion Queue Entry Size 01:29:29.842 Max: 16 01:29:29.842 Min: 16 01:29:29.842 Number of Namespaces: 256 01:29:29.842 Compare Command: Supported 01:29:29.842 Write Uncorrectable Command: Not Supported 01:29:29.842 Dataset Management Command: Supported 01:29:29.842 Write Zeroes Command: Supported 01:29:29.842 Set Features Save Field: Supported 01:29:29.842 Reservations: Not Supported 01:29:29.842 Timestamp: Supported 01:29:29.842 Copy: Supported 01:29:29.842 Volatile Write Cache: Present 01:29:29.842 Atomic Write Unit (Normal): 1 01:29:29.842 Atomic Write Unit (PFail): 1 01:29:29.842 Atomic Compare & Write Unit: 1 01:29:29.842 Fused Compare & Write: Not Supported 01:29:29.842 Scatter-Gather List 01:29:29.842 SGL Command Set: Supported 01:29:29.842 SGL Keyed: Not Supported 01:29:29.842 SGL Bit Bucket Descriptor: Not Supported 01:29:29.842 SGL Metadata Pointer: Not Supported 01:29:29.842 Oversized SGL: Not Supported 01:29:29.842 SGL Metadata Address: Not Supported 01:29:29.842 SGL Offset: Not Supported 01:29:29.842 Transport SGL Data Block: Not Supported 01:29:29.842 Replay Protected Memory Block: Not Supported 01:29:29.842 01:29:29.842 Firmware Slot Information 01:29:29.842 ========================= 01:29:29.842 Active slot: 1 01:29:29.842 Slot 1 Firmware Revision: 1.0 01:29:29.842 01:29:29.842 01:29:29.842 Commands Supported and Effects 01:29:29.842 ============================== 01:29:29.842 Admin Commands 01:29:29.842 -------------- 01:29:29.842 Delete I/O Submission Queue (00h): Supported 01:29:29.842 Create I/O Submission Queue (01h): Supported 01:29:29.842 Get Log Page (02h): Supported 01:29:29.842 Delete I/O Completion Queue (04h): Supported 01:29:29.842 Create I/O Completion Queue (05h): Supported 01:29:29.842 Identify (06h): Supported 01:29:29.842 Abort (08h): Supported 01:29:29.842 Set Features (09h): Supported 01:29:29.842 Get Features (0Ah): Supported 01:29:29.842 Asynchronous Event Request (0Ch): Supported 01:29:29.842 Namespace Attachment (15h): Supported NS-Inventory-Change 01:29:29.842 Directive Send (19h): Supported 01:29:29.842 Directive Receive (1Ah): Supported 01:29:29.842 Virtualization Management (1Ch): Supported 01:29:29.842 Doorbell Buffer Config (7Ch): Supported 01:29:29.842 Format NVM (80h): Supported LBA-Change 01:29:29.842 I/O Commands 01:29:29.842 ------------ 01:29:29.842 Flush (00h): Supported LBA-Change 01:29:29.842 Write (01h): Supported LBA-Change 01:29:29.842 Read (02h): Supported 01:29:29.842 Compare (05h): Supported 01:29:29.842 Write Zeroes (08h): Supported LBA-Change 01:29:29.842 Dataset Management (09h): Supported LBA-Change 01:29:29.842 Unknown (0Ch): Supported 01:29:29.842 Unknown (12h): Supported 01:29:29.842 Copy
(19h): Supported LBA-Change 01:29:29.842 Unknown (1Dh): Supported LBA-Change 01:29:29.842 01:29:29.842 Error Log 01:29:29.842 ========= 01:29:29.842 01:29:29.842 Arbitration 01:29:29.842 =========== 01:29:29.842 Arbitration Burst: no limit 01:29:29.842 01:29:29.842 Power Management 01:29:29.842 ================ 01:29:29.842 Number of Power States: 1 01:29:29.842 Current Power State: Power State #0 01:29:29.842 Power State #0: 01:29:29.842 Max Power: 25.00 W 01:29:29.842 Non-Operational State: Operational 01:29:29.842 Entry Latency: 16 microseconds 01:29:29.842 Exit Latency: 4 microseconds 01:29:29.842 Relative Read Throughput: 0 01:29:29.842 Relative Read Latency: 0 01:29:29.842 Relative Write Throughput: 0 01:29:29.842 Relative Write Latency: 0 01:29:29.842 Idle Power: Not Reported 01:29:29.842 Active Power: Not Reported 01:29:29.842 Non-Operational Permissive Mode: Not Supported 01:29:29.842 01:29:29.842 Health Information 01:29:29.842 ================== 01:29:29.842 Critical Warnings: 01:29:29.842 Available Spare Space: OK 01:29:29.842 Temperature: OK 01:29:29.842 Device Reliability: OK 01:29:29.842 Read Only: No 01:29:29.842 Volatile Memory Backup: OK 01:29:29.842 Current Temperature: 323 Kelvin (50 Celsius) 01:29:29.842 Temperature Threshold: 343 Kelvin (70 Celsius) 01:29:29.842 Available Spare: 0% 01:29:29.842 Available Spare Threshold: 0% 01:29:29.842 Life Percentage Used: 0% 01:29:29.842 Data Units Read: 796 01:29:29.842 Data Units Written: 725 01:29:29.842 Host Read Commands: 32693 01:29:29.842 Host Write Commands: 32116 01:29:29.842 Controller Busy Time: 0 minutes 01:29:29.842 Power Cycles: 0 01:29:29.842 Power On Hours: 0 hours 01:29:29.842 Unsafe Shutdowns: 0 01:29:29.842 Unrecoverable Media Errors: 0 01:29:29.842 Lifetime Error Log Entries: 0 01:29:29.842 Warning Temperature Time: 0 minutes 01:29:29.842 Critical Temperature Time: 0 minutes 01:29:29.842 01:29:29.842 Number of Queues 01:29:29.842 ================ 01:29:29.842 Number of I/O Submission Queues: 64 01:29:29.842 Number of I/O Completion Queues: 64 01:29:29.842 01:29:29.842 ZNS Specific Controller Data 01:29:29.842 ============================ 01:29:29.842 Zone Append Size Limit: 0 01:29:29.842 01:29:29.842 01:29:29.842 Active Namespaces 01:29:29.842 ================= 01:29:29.842 Namespace ID:1 01:29:29.842 Error Recovery Timeout: Unlimited 01:29:29.842 Command Set Identifier: NVM (00h) 01:29:29.842 Deallocate: Supported 01:29:29.842 Deallocated/Unwritten Error: Supported 01:29:29.842 Deallocated Read Value: All 0x00 01:29:29.842 Deallocate in Write Zeroes: Not Supported 01:29:29.842 Deallocated Guard Field: 0xFFFF 01:29:29.842 Flush: Supported 01:29:29.842 Reservation: Not Supported 01:29:29.842 Namespace Sharing Capabilities: Multiple Controllers 01:29:29.842 Size (in LBAs): 262144 (1GiB) 01:29:29.842 Capacity (in LBAs): 262144 (1GiB) 01:29:29.842 Utilization (in LBAs): 262144 (1GiB) 01:29:29.842 Thin Provisioning: Not Supported 01:29:29.842 Per-NS Atomic Units: No 01:29:29.842 Maximum Single Source Range Length: 128 01:29:29.842 Maximum Copy Length: 128 01:29:29.842 Maximum Source Range Count: 128 01:29:29.842 NGUID/EUI64 Never Reused: No 01:29:29.842 Namespace Write Protected: No 01:29:29.842 Endurance group ID: 1 01:29:29.842 Number of LBA Formats: 8 01:29:29.842 Current LBA Format: LBA Format #04 01:29:29.842 LBA Format #00: Data Size: 512 Metadata Size: 0 01:29:29.842 LBA Format #01: Data Size: 512 Metadata Size: 8 01:29:29.842 LBA Format #02: Data Size: 512 Metadata Size: 16 01:29:29.842 LBA Format #03: Data 
Size: 512 Metadata Size: 64 01:29:29.842 LBA Format #04: Data Size: 4096 Metadata Size: 0 01:29:29.842 LBA Format #05: Data Size: 4096 Metadata Size: 8 01:29:29.842 LBA Format #06: Data Size: 4096 Metadata Size: 16 01:29:29.842 LBA Format #07: Data Size: 4096 Metadata Size: 64 01:29:29.842 01:29:29.842 Get Feature FDP: 01:29:29.842 ================ 01:29:29.842 Enabled: Yes 01:29:29.842 FDP configuration index: 0 01:29:29.842 01:29:29.842 FDP configurations log page 01:29:29.842 =========================== 01:29:29.842 Number of FDP configurations: 1 01:29:29.842 Version: 0 01:29:29.842 Size: 112 01:29:29.842 FDP Configuration Descriptor: 0 01:29:29.842 Descriptor Size: 96 01:29:29.842 Reclaim Group Identifier format: 2 01:29:29.842 FDP Volatile Write Cache: Not Present 01:29:29.842 FDP Configuration: Valid 01:29:29.842 Vendor Specific Size: 0 01:29:29.842 Number of Reclaim Groups: 2 01:29:29.842 Number of Reclaim Unit Handles: 8 01:29:29.842 Max Placement Identifiers: 128 01:29:29.842 Number of Namespaces Supported: 256 01:29:29.842 Reclaim Unit Nominal Size: 6000000 bytes 01:29:29.842 Estimated Reclaim Unit Time Limit: Not Reported 01:29:29.842 RUH Desc #000: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #001: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #002: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #003: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #004: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #005: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #006: RUH Type: Initially Isolated 01:29:29.842 RUH Desc #007: RUH Type: Initially Isolated 01:29:29.842 01:29:29.842 FDP reclaim unit handle usage log page 01:29:30.100 ====================================== 01:29:30.100 Number of Reclaim Unit Handles: 8 01:29:30.100 RUH Usage Desc #000: RUH Attributes: Controller Specified 01:29:30.100 RUH Usage Desc #001: RUH Attributes: Unused 01:29:30.100 RUH Usage Desc #002: RUH Attributes: Unused 01:29:30.100 RUH Usage Desc #003: RUH Attributes: Unused 01:29:30.100 RUH Usage Desc #004: RUH Attributes: Unused 01:29:30.100 RUH Usage Desc #005: RUH Attributes: Unused 01:29:30.100 RUH Usage Desc #006: RUH Attributes: Unused 01:29:30.100 RUH Usage Desc #007: RUH Attributes: Unused 01:29:30.100 01:29:30.100 FDP statistics log page 01:29:30.100 ======================= 01:29:30.100 Host bytes with metadata written: 459382784 01:29:30.100 Media bytes with metadata written: 459448320 01:29:30.100 Media bytes erased: 0 01:29:30.100 01:29:30.100 FDP events log page 01:29:30.100 =================== 01:29:30.100 Number of FDP events: 0 01:29:30.100 01:29:30.100 NVM Specific Namespace Data 01:29:30.100 =========================== 01:29:30.100 Logical Block Storage Tag Mask: 0 01:29:30.100 Protection Information Capabilities: 01:29:30.100 16b Guard Protection Information Storage Tag Support: No 01:29:30.100 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 01:29:30.100 Storage Tag Check Read Support: No 01:29:30.100 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 01:29:30.100 01:29:30.100 real 0m2.367s 01:29:30.100 user 0m1.202s 01:29:30.100 sys 0m0.951s 01:29:30.100 05:24:21 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:30.100 05:24:21 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 01:29:30.100 ************************************ 01:29:30.100 END TEST nvme_identify 01:29:30.100 ************************************ 01:29:30.100 05:24:21 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 01:29:30.100 05:24:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:30.100 05:24:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:30.100 05:24:21 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:30.100 ************************************ 01:29:30.100 START TEST nvme_perf 01:29:30.100 ************************************ 01:29:30.100 05:24:21 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 01:29:30.100 05:24:21 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 01:29:31.479 Initializing NVMe Controllers 01:29:31.479 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:29:31.479 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:29:31.479 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:29:31.479 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:29:31.479 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:29:31.479 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 01:29:31.479 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 01:29:31.479 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 01:29:31.479 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 01:29:31.479 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 01:29:31.479 Initialization complete. Launching workers. 
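For reference, the nvme_identify pass that finished above (END TEST nvme_identify) simply loops spdk_nvme_identify over each controller's PCIe address, as the xtrace lines show. A minimal standalone sketch, assuming the build layout used in this run; the bdfs array below is illustrative (listed in the attach order seen in the log), since the harness discovers the addresses itself:

#!/usr/bin/env bash
# Re-run the identify step by hand against the QEMU NVMe controllers.
# Binary path and flags are taken from the log; the bdf list is an assumption.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:13.0 0000:00:12.0)

for bdf in "${bdfs[@]}"; do
    # -r selects the transport ID; -i 0 is passed exactly as recorded above.
    "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done

Each iteration prints the controller and namespace data seen above: controller capabilities, log page support, LBA formats, and, on the FDP-enabled subsystem (fdp-subsys3), the FDP configuration, usage, statistics, and events pages.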
01:29:31.479 ======================================================== 01:29:31.479 Latency(us) 01:29:31.479 Device Information : IOPS MiB/s Average min max 01:29:31.479 PCIE (0000:00:10.0) NSID 1 from core 0: 12337.33 144.58 10392.55 8433.04 50782.27 01:29:31.479 PCIE (0000:00:11.0) NSID 1 from core 0: 12337.33 144.58 10375.89 8603.40 48965.93 01:29:31.479 PCIE (0000:00:13.0) NSID 1 from core 0: 12337.33 144.58 10355.66 8546.78 48058.40 01:29:31.479 PCIE (0000:00:12.0) NSID 1 from core 0: 12337.33 144.58 10333.66 8591.09 45523.97 01:29:31.479 PCIE (0000:00:12.0) NSID 2 from core 0: 12401.25 145.33 10258.57 8568.00 34919.99 01:29:31.479 PCIE (0000:00:12.0) NSID 3 from core 0: 12401.25 145.33 10239.16 8525.28 34382.26 01:29:31.479 ======================================================== 01:29:31.479 Total : 74151.83 868.97 10325.78 8433.04 50782.27 01:29:31.479 01:29:31.479 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 01:29:31.479 ================================================================================= 01:29:31.479 1.00000% : 8757.993us 01:29:31.479 10.00000% : 9115.462us 01:29:31.479 25.00000% : 9472.931us 01:29:31.479 50.00000% : 9889.978us 01:29:31.479 75.00000% : 10366.604us 01:29:31.479 90.00000% : 11379.433us 01:29:31.479 95.00000% : 11975.215us 01:29:31.479 98.00000% : 14834.967us 01:29:31.479 99.00000% : 39798.225us 01:29:31.479 99.50000% : 48854.109us 01:29:31.479 99.90000% : 50522.298us 01:29:31.479 99.99000% : 50760.611us 01:29:31.479 99.99900% : 50998.924us 01:29:31.479 99.99990% : 50998.924us 01:29:31.479 99.99999% : 50998.924us 01:29:31.479 01:29:31.479 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 01:29:31.479 ================================================================================= 01:29:31.479 1.00000% : 8817.571us 01:29:31.479 10.00000% : 9175.040us 01:29:31.479 25.00000% : 9472.931us 01:29:31.479 50.00000% : 9830.400us 01:29:31.479 75.00000% : 10366.604us 01:29:31.479 90.00000% : 11439.011us 01:29:31.479 95.00000% : 11915.636us 01:29:31.479 98.00000% : 14656.233us 01:29:31.479 99.00000% : 37891.724us 01:29:31.479 99.50000% : 47185.920us 01:29:31.479 99.90000% : 48615.796us 01:29:31.479 99.99000% : 49092.422us 01:29:31.479 99.99900% : 49092.422us 01:29:31.479 99.99990% : 49092.422us 01:29:31.479 99.99999% : 49092.422us 01:29:31.479 01:29:31.479 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 01:29:31.479 ================================================================================= 01:29:31.479 1.00000% : 8817.571us 01:29:31.479 10.00000% : 9175.040us 01:29:31.479 25.00000% : 9472.931us 01:29:31.479 50.00000% : 9830.400us 01:29:31.479 75.00000% : 10366.604us 01:29:31.479 90.00000% : 11439.011us 01:29:31.479 95.00000% : 11915.636us 01:29:31.479 98.00000% : 13881.716us 01:29:31.479 99.00000% : 35985.222us 01:29:31.479 99.50000% : 46232.669us 01:29:31.479 99.90000% : 47900.858us 01:29:31.479 99.99000% : 48139.171us 01:29:31.479 99.99900% : 48139.171us 01:29:31.479 99.99990% : 48139.171us 01:29:31.479 99.99999% : 48139.171us 01:29:31.479 01:29:31.479 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 01:29:31.479 ================================================================================= 01:29:31.479 1.00000% : 8817.571us 01:29:31.479 10.00000% : 9175.040us 01:29:31.479 25.00000% : 9472.931us 01:29:31.479 50.00000% : 9830.400us 01:29:31.479 75.00000% : 10307.025us 01:29:31.479 90.00000% : 11439.011us 01:29:31.479 95.00000% : 11915.636us 01:29:31.479 98.00000% : 13702.982us 
01:29:31.479 99.00000% : 33840.407us 01:29:31.479 99.50000% : 43611.229us 01:29:31.479 99.90000% : 45279.418us 01:29:31.479 99.99000% : 45517.731us 01:29:31.479 99.99900% : 45756.044us 01:29:31.479 99.99990% : 45756.044us 01:29:31.479 99.99999% : 45756.044us 01:29:31.480 01:29:31.480 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 01:29:31.480 ================================================================================= 01:29:31.480 1.00000% : 8817.571us 01:29:31.480 10.00000% : 9175.040us 01:29:31.480 25.00000% : 9472.931us 01:29:31.480 50.00000% : 9830.400us 01:29:31.480 75.00000% : 10307.025us 01:29:31.480 90.00000% : 11439.011us 01:29:31.480 95.00000% : 11975.215us 01:29:31.480 98.00000% : 14120.029us 01:29:31.480 99.00000% : 25737.775us 01:29:31.480 99.50000% : 32887.156us 01:29:31.480 99.90000% : 34555.345us 01:29:31.480 99.99000% : 35031.971us 01:29:31.480 99.99900% : 35031.971us 01:29:31.480 99.99990% : 35031.971us 01:29:31.480 99.99999% : 35031.971us 01:29:31.480 01:29:31.480 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 01:29:31.480 ================================================================================= 01:29:31.480 1.00000% : 8817.571us 01:29:31.480 10.00000% : 9175.040us 01:29:31.480 25.00000% : 9472.931us 01:29:31.480 50.00000% : 9830.400us 01:29:31.480 75.00000% : 10366.604us 01:29:31.480 90.00000% : 11439.011us 01:29:31.480 95.00000% : 11975.215us 01:29:31.480 98.00000% : 14358.342us 01:29:31.480 99.00000% : 23592.960us 01:29:31.480 99.50000% : 32410.531us 01:29:31.480 99.90000% : 34078.720us 01:29:31.480 99.99000% : 34555.345us 01:29:31.480 99.99900% : 34555.345us 01:29:31.480 99.99990% : 34555.345us 01:29:31.480 99.99999% : 34555.345us 01:29:31.480 01:29:31.480 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 01:29:31.480 ============================================================================== 01:29:31.480 Range in us Cumulative IO count 01:29:31.480 8400.524 - 8460.102: 0.0162% ( 2) 01:29:31.480 8460.102 - 8519.680: 0.0972% ( 10) 01:29:31.480 8519.680 - 8579.258: 0.2024% ( 13) 01:29:31.480 8579.258 - 8638.836: 0.4129% ( 26) 01:29:31.480 8638.836 - 8698.415: 0.6962% ( 35) 01:29:31.480 8698.415 - 8757.993: 1.1901% ( 61) 01:29:31.480 8757.993 - 8817.571: 2.0644% ( 108) 01:29:31.480 8817.571 - 8877.149: 3.0845% ( 126) 01:29:31.480 8877.149 - 8936.727: 4.5094% ( 176) 01:29:31.480 8936.727 - 8996.305: 6.1852% ( 207) 01:29:31.480 8996.305 - 9055.884: 7.9744% ( 221) 01:29:31.480 9055.884 - 9115.462: 10.0712% ( 259) 01:29:31.480 9115.462 - 9175.040: 12.3057% ( 276) 01:29:31.480 9175.040 - 9234.618: 14.7830% ( 306) 01:29:31.480 9234.618 - 9294.196: 17.6733% ( 357) 01:29:31.480 9294.196 - 9353.775: 20.5149% ( 351) 01:29:31.480 9353.775 - 9413.353: 23.7290% ( 397) 01:29:31.480 9413.353 - 9472.931: 27.2911% ( 440) 01:29:31.480 9472.931 - 9532.509: 30.6752% ( 418) 01:29:31.480 9532.509 - 9592.087: 34.4560% ( 467) 01:29:31.480 9592.087 - 9651.665: 38.2367% ( 467) 01:29:31.480 9651.665 - 9711.244: 42.1470% ( 483) 01:29:31.480 9711.244 - 9770.822: 45.9197% ( 466) 01:29:31.480 9770.822 - 9830.400: 49.7328% ( 471) 01:29:31.480 9830.400 - 9889.978: 53.4974% ( 465) 01:29:31.480 9889.978 - 9949.556: 56.9301% ( 424) 01:29:31.480 9949.556 - 10009.135: 60.3384% ( 421) 01:29:31.480 10009.135 - 10068.713: 63.5444% ( 396) 01:29:31.480 10068.713 - 10128.291: 66.4103% ( 354) 01:29:31.480 10128.291 - 10187.869: 68.9929% ( 319) 01:29:31.480 10187.869 - 10247.447: 71.3002% ( 285) 01:29:31.480 10247.447 - 10307.025: 73.4780% ( 269) 
01:29:31.480 10307.025 - 10366.604: 75.5019% ( 250) 01:29:31.480 10366.604 - 10426.182: 77.3964% ( 234) 01:29:31.480 10426.182 - 10485.760: 79.0074% ( 199) 01:29:31.480 10485.760 - 10545.338: 80.3433% ( 165) 01:29:31.480 10545.338 - 10604.916: 81.4929% ( 142) 01:29:31.480 10604.916 - 10664.495: 82.4563% ( 119) 01:29:31.480 10664.495 - 10724.073: 83.5573% ( 136) 01:29:31.480 10724.073 - 10783.651: 84.3021% ( 92) 01:29:31.480 10783.651 - 10843.229: 84.9336% ( 78) 01:29:31.480 10843.229 - 10902.807: 85.6218% ( 85) 01:29:31.480 10902.807 - 10962.385: 86.2209% ( 74) 01:29:31.480 10962.385 - 11021.964: 86.8766% ( 81) 01:29:31.480 11021.964 - 11081.542: 87.4109% ( 66) 01:29:31.480 11081.542 - 11141.120: 88.0100% ( 74) 01:29:31.480 11141.120 - 11200.698: 88.6982% ( 85) 01:29:31.480 11200.698 - 11260.276: 89.2811% ( 72) 01:29:31.480 11260.276 - 11319.855: 89.8316% ( 68) 01:29:31.480 11319.855 - 11379.433: 90.4145% ( 72) 01:29:31.480 11379.433 - 11439.011: 91.0055% ( 73) 01:29:31.480 11439.011 - 11498.589: 91.4913% ( 60) 01:29:31.480 11498.589 - 11558.167: 92.0742% ( 72) 01:29:31.480 11558.167 - 11617.745: 92.5923% ( 64) 01:29:31.480 11617.745 - 11677.324: 93.1266% ( 66) 01:29:31.480 11677.324 - 11736.902: 93.6124% ( 60) 01:29:31.480 11736.902 - 11796.480: 94.0738% ( 57) 01:29:31.480 11796.480 - 11856.058: 94.5920% ( 64) 01:29:31.480 11856.058 - 11915.636: 94.9482% ( 44) 01:29:31.480 11915.636 - 11975.215: 95.4501% ( 62) 01:29:31.480 11975.215 - 12034.793: 95.8387% ( 48) 01:29:31.480 12034.793 - 12094.371: 96.1383% ( 37) 01:29:31.480 12094.371 - 12153.949: 96.3892% ( 31) 01:29:31.480 12153.949 - 12213.527: 96.6321% ( 30) 01:29:31.480 12213.527 - 12273.105: 96.8912% ( 32) 01:29:31.480 12273.105 - 12332.684: 97.0045% ( 14) 01:29:31.480 12332.684 - 12392.262: 97.1179% ( 14) 01:29:31.480 12392.262 - 12451.840: 97.2231% ( 13) 01:29:31.480 12451.840 - 12511.418: 97.2717% ( 6) 01:29:31.480 12511.418 - 12570.996: 97.3041% ( 4) 01:29:31.480 12570.996 - 12630.575: 97.3203% ( 2) 01:29:31.480 12630.575 - 12690.153: 97.3446% ( 3) 01:29:31.480 12690.153 - 12749.731: 97.3688% ( 3) 01:29:31.480 12749.731 - 12809.309: 97.3931% ( 3) 01:29:31.480 12809.309 - 12868.887: 97.4093% ( 2) 01:29:31.480 12928.465 - 12988.044: 97.4417% ( 4) 01:29:31.480 12988.044 - 13047.622: 97.4579% ( 2) 01:29:31.480 13047.622 - 13107.200: 97.4903% ( 4) 01:29:31.480 13107.200 - 13166.778: 97.5146% ( 3) 01:29:31.480 13166.778 - 13226.356: 97.5308% ( 2) 01:29:31.480 13226.356 - 13285.935: 97.5551% ( 3) 01:29:31.480 13285.935 - 13345.513: 97.5712% ( 2) 01:29:31.480 13345.513 - 13405.091: 97.5955% ( 3) 01:29:31.480 13405.091 - 13464.669: 97.6198% ( 3) 01:29:31.480 13464.669 - 13524.247: 97.6522% ( 4) 01:29:31.480 13524.247 - 13583.825: 97.6603% ( 1) 01:29:31.480 13583.825 - 13643.404: 97.6927% ( 4) 01:29:31.480 13643.404 - 13702.982: 97.7089% ( 2) 01:29:31.480 13702.982 - 13762.560: 97.7332% ( 3) 01:29:31.480 13762.560 - 13822.138: 97.7655% ( 4) 01:29:31.480 13822.138 - 13881.716: 97.7898% ( 3) 01:29:31.480 13881.716 - 13941.295: 97.8222% ( 4) 01:29:31.480 13941.295 - 14000.873: 97.8465% ( 3) 01:29:31.480 14000.873 - 14060.451: 97.8627% ( 2) 01:29:31.481 14060.451 - 14120.029: 97.8789% ( 2) 01:29:31.481 14120.029 - 14179.607: 97.9032% ( 3) 01:29:31.481 14179.607 - 14239.185: 97.9275% ( 3) 01:29:31.481 14656.233 - 14715.811: 97.9517% ( 3) 01:29:31.481 14715.811 - 14775.389: 97.9679% ( 2) 01:29:31.481 14775.389 - 14834.967: 98.0003% ( 4) 01:29:31.481 14834.967 - 14894.545: 98.0327% ( 4) 01:29:31.481 14894.545 - 14954.124: 98.0651% ( 4) 
01:29:31.481 14954.124 - 15013.702: 98.1056% ( 5) 01:29:31.481 15013.702 - 15073.280: 98.1541% ( 6) 01:29:31.481 15073.280 - 15132.858: 98.2108% ( 7) 01:29:31.481 15132.858 - 15192.436: 98.2513% ( 5) 01:29:31.481 15192.436 - 15252.015: 98.2999% ( 6) 01:29:31.481 15252.015 - 15371.171: 98.3970% ( 12) 01:29:31.481 15371.171 - 15490.327: 98.4861% ( 11) 01:29:31.481 15490.327 - 15609.484: 98.5751% ( 11) 01:29:31.481 15609.484 - 15728.640: 98.6723% ( 12) 01:29:31.481 15728.640 - 15847.796: 98.7694% ( 12) 01:29:31.481 15847.796 - 15966.953: 98.8828% ( 14) 01:29:31.481 15966.953 - 16086.109: 98.9394% ( 7) 01:29:31.481 16086.109 - 16205.265: 98.9637% ( 3) 01:29:31.481 39559.913 - 39798.225: 99.0285% ( 8) 01:29:31.481 39798.225 - 40036.538: 99.0771% ( 6) 01:29:31.481 40036.538 - 40274.851: 99.1337% ( 7) 01:29:31.481 40274.851 - 40513.164: 99.1985% ( 8) 01:29:31.481 40513.164 - 40751.476: 99.2552% ( 7) 01:29:31.481 40751.476 - 40989.789: 99.3119% ( 7) 01:29:31.481 40989.789 - 41228.102: 99.3766% ( 8) 01:29:31.481 41228.102 - 41466.415: 99.4414% ( 8) 01:29:31.481 41466.415 - 41704.727: 99.4819% ( 5) 01:29:31.481 48615.796 - 48854.109: 99.5304% ( 6) 01:29:31.481 48854.109 - 49092.422: 99.5790% ( 6) 01:29:31.481 49092.422 - 49330.735: 99.6357% ( 7) 01:29:31.481 49330.735 - 49569.047: 99.6924% ( 7) 01:29:31.481 49569.047 - 49807.360: 99.7571% ( 8) 01:29:31.481 49807.360 - 50045.673: 99.8219% ( 8) 01:29:31.481 50045.673 - 50283.985: 99.8867% ( 8) 01:29:31.481 50283.985 - 50522.298: 99.9514% ( 8) 01:29:31.481 50522.298 - 50760.611: 99.9919% ( 5) 01:29:31.481 50760.611 - 50998.924: 100.0000% ( 1) 01:29:31.481 01:29:31.481 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 01:29:31.481 ============================================================================== 01:29:31.481 Range in us Cumulative IO count 01:29:31.481 8579.258 - 8638.836: 0.0567% ( 7) 01:29:31.481 8638.836 - 8698.415: 0.3157% ( 32) 01:29:31.481 8698.415 - 8757.993: 0.6153% ( 37) 01:29:31.481 8757.993 - 8817.571: 1.0929% ( 59) 01:29:31.481 8817.571 - 8877.149: 1.7811% ( 85) 01:29:31.481 8877.149 - 8936.727: 2.7931% ( 125) 01:29:31.481 8936.727 - 8996.305: 4.1937% ( 173) 01:29:31.481 8996.305 - 9055.884: 6.0881% ( 234) 01:29:31.481 9055.884 - 9115.462: 8.2335% ( 265) 01:29:31.481 9115.462 - 9175.040: 10.5651% ( 288) 01:29:31.481 9175.040 - 9234.618: 13.2205% ( 328) 01:29:31.481 9234.618 - 9294.196: 16.1188% ( 358) 01:29:31.481 9294.196 - 9353.775: 19.1629% ( 376) 01:29:31.481 9353.775 - 9413.353: 22.5389% ( 417) 01:29:31.481 9413.353 - 9472.931: 26.3196% ( 467) 01:29:31.481 9472.931 - 9532.509: 30.3190% ( 494) 01:29:31.481 9532.509 - 9592.087: 34.3993% ( 504) 01:29:31.481 9592.087 - 9651.665: 38.3905% ( 493) 01:29:31.481 9651.665 - 9711.244: 42.4628% ( 503) 01:29:31.481 9711.244 - 9770.822: 46.5107% ( 500) 01:29:31.481 9770.822 - 9830.400: 50.4615% ( 488) 01:29:31.481 9830.400 - 9889.978: 54.3232% ( 477) 01:29:31.481 9889.978 - 9949.556: 58.2254% ( 482) 01:29:31.481 9949.556 - 10009.135: 61.6499% ( 423) 01:29:31.481 10009.135 - 10068.713: 64.7021% ( 377) 01:29:31.481 10068.713 - 10128.291: 67.4061% ( 334) 01:29:31.481 10128.291 - 10187.869: 69.8591% ( 303) 01:29:31.481 10187.869 - 10247.447: 72.3041% ( 302) 01:29:31.481 10247.447 - 10307.025: 74.4981% ( 271) 01:29:31.481 10307.025 - 10366.604: 76.4330% ( 239) 01:29:31.481 10366.604 - 10426.182: 78.1736% ( 215) 01:29:31.481 10426.182 - 10485.760: 79.5823% ( 174) 01:29:31.481 10485.760 - 10545.338: 80.6671% ( 134) 01:29:31.481 10545.338 - 10604.916: 81.6953% ( 127) 01:29:31.481 
10604.916 - 10664.495: 82.5777% ( 109) 01:29:31.481 10664.495 - 10724.073: 83.4683% ( 110) 01:29:31.481 10724.073 - 10783.651: 84.2778% ( 100) 01:29:31.481 10783.651 - 10843.229: 84.8688% ( 73) 01:29:31.481 10843.229 - 10902.807: 85.5003% ( 78) 01:29:31.481 10902.807 - 10962.385: 85.9861% ( 60) 01:29:31.481 10962.385 - 11021.964: 86.4880% ( 62) 01:29:31.481 11021.964 - 11081.542: 87.0547% ( 70) 01:29:31.481 11081.542 - 11141.120: 87.6214% ( 70) 01:29:31.481 11141.120 - 11200.698: 88.1477% ( 65) 01:29:31.481 11200.698 - 11260.276: 88.7063% ( 69) 01:29:31.481 11260.276 - 11319.855: 89.3378% ( 78) 01:29:31.481 11319.855 - 11379.433: 89.9773% ( 79) 01:29:31.481 11379.433 - 11439.011: 90.6088% ( 78) 01:29:31.481 11439.011 - 11498.589: 91.2079% ( 74) 01:29:31.481 11498.589 - 11558.167: 91.8718% ( 82) 01:29:31.481 11558.167 - 11617.745: 92.5032% ( 78) 01:29:31.481 11617.745 - 11677.324: 93.1266% ( 77) 01:29:31.481 11677.324 - 11736.902: 93.7257% ( 74) 01:29:31.481 11736.902 - 11796.480: 94.3410% ( 76) 01:29:31.481 11796.480 - 11856.058: 94.8591% ( 64) 01:29:31.481 11856.058 - 11915.636: 95.3125% ( 56) 01:29:31.481 11915.636 - 11975.215: 95.7254% ( 51) 01:29:31.481 11975.215 - 12034.793: 96.0816% ( 44) 01:29:31.481 12034.793 - 12094.371: 96.3973% ( 39) 01:29:31.481 12094.371 - 12153.949: 96.6564% ( 32) 01:29:31.481 12153.949 - 12213.527: 96.8507% ( 24) 01:29:31.481 12213.527 - 12273.105: 97.0207% ( 21) 01:29:31.481 12273.105 - 12332.684: 97.1503% ( 16) 01:29:31.481 12332.684 - 12392.262: 97.2312% ( 10) 01:29:31.481 12392.262 - 12451.840: 97.2879% ( 7) 01:29:31.481 12451.840 - 12511.418: 97.3284% ( 5) 01:29:31.481 12511.418 - 12570.996: 97.4012% ( 9) 01:29:31.481 12570.996 - 12630.575: 97.4093% ( 1) 01:29:31.481 12988.044 - 13047.622: 97.4336% ( 3) 01:29:31.481 13047.622 - 13107.200: 97.4579% ( 3) 01:29:31.481 13107.200 - 13166.778: 97.4903% ( 4) 01:29:31.481 13166.778 - 13226.356: 97.5227% ( 4) 01:29:31.481 13226.356 - 13285.935: 97.5551% ( 4) 01:29:31.481 13285.935 - 13345.513: 97.5874% ( 4) 01:29:31.481 13345.513 - 13405.091: 97.6036% ( 2) 01:29:31.481 13405.091 - 13464.669: 97.6360% ( 4) 01:29:31.481 13464.669 - 13524.247: 97.6684% ( 4) 01:29:31.481 13524.247 - 13583.825: 97.7008% ( 4) 01:29:31.481 13583.825 - 13643.404: 97.7332% ( 4) 01:29:31.481 13643.404 - 13702.982: 97.7655% ( 4) 01:29:31.481 13702.982 - 13762.560: 97.7979% ( 4) 01:29:31.481 13762.560 - 13822.138: 97.8303% ( 4) 01:29:31.481 13822.138 - 13881.716: 97.8546% ( 3) 01:29:31.481 13881.716 - 13941.295: 97.8789% ( 3) 01:29:31.481 13941.295 - 14000.873: 97.9032% ( 3) 01:29:31.481 14000.873 - 14060.451: 97.9275% ( 3) 01:29:31.481 14358.342 - 14417.920: 97.9356% ( 1) 01:29:31.482 14417.920 - 14477.498: 97.9517% ( 2) 01:29:31.482 14477.498 - 14537.076: 97.9598% ( 1) 01:29:31.482 14537.076 - 14596.655: 97.9841% ( 3) 01:29:31.482 14596.655 - 14656.233: 98.0084% ( 3) 01:29:31.482 14656.233 - 14715.811: 98.0408% ( 4) 01:29:31.482 14715.811 - 14775.389: 98.0732% ( 4) 01:29:31.482 14775.389 - 14834.967: 98.1137% ( 5) 01:29:31.482 14834.967 - 14894.545: 98.1460% ( 4) 01:29:31.482 14894.545 - 14954.124: 98.2027% ( 7) 01:29:31.482 14954.124 - 15013.702: 98.2513% ( 6) 01:29:31.482 15013.702 - 15073.280: 98.3242% ( 9) 01:29:31.482 15073.280 - 15132.858: 98.3727% ( 6) 01:29:31.482 15132.858 - 15192.436: 98.4456% ( 9) 01:29:31.482 15192.436 - 15252.015: 98.5023% ( 7) 01:29:31.482 15252.015 - 15371.171: 98.6237% ( 15) 01:29:31.482 15371.171 - 15490.327: 98.7209% ( 12) 01:29:31.482 15490.327 - 15609.484: 98.7937% ( 9) 01:29:31.482 15609.484 - 
15728.640: 98.8585% ( 8) 01:29:31.482 15728.640 - 15847.796: 98.8990% ( 5) 01:29:31.482 15847.796 - 15966.953: 98.9475% ( 6) 01:29:31.482 15966.953 - 16086.109: 98.9637% ( 2) 01:29:31.482 37415.098 - 37653.411: 98.9880% ( 3) 01:29:31.482 37653.411 - 37891.724: 99.0447% ( 7) 01:29:31.482 37891.724 - 38130.036: 99.1014% ( 7) 01:29:31.482 38130.036 - 38368.349: 99.1661% ( 8) 01:29:31.482 38368.349 - 38606.662: 99.2309% ( 8) 01:29:31.482 38606.662 - 38844.975: 99.2957% ( 8) 01:29:31.482 38844.975 - 39083.287: 99.3604% ( 8) 01:29:31.482 39083.287 - 39321.600: 99.4252% ( 8) 01:29:31.482 39321.600 - 39559.913: 99.4819% ( 7) 01:29:31.482 46947.607 - 47185.920: 99.5142% ( 4) 01:29:31.482 47185.920 - 47424.233: 99.5790% ( 8) 01:29:31.482 47424.233 - 47662.545: 99.6438% ( 8) 01:29:31.482 47662.545 - 47900.858: 99.7005% ( 7) 01:29:31.482 47900.858 - 48139.171: 99.7652% ( 8) 01:29:31.482 48139.171 - 48377.484: 99.8381% ( 9) 01:29:31.482 48377.484 - 48615.796: 99.9028% ( 8) 01:29:31.482 48615.796 - 48854.109: 99.9676% ( 8) 01:29:31.482 48854.109 - 49092.422: 100.0000% ( 4) 01:29:31.482 01:29:31.482 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 01:29:31.482 ============================================================================== 01:29:31.482 Range in us Cumulative IO count 01:29:31.482 8519.680 - 8579.258: 0.0567% ( 7) 01:29:31.482 8579.258 - 8638.836: 0.1700% ( 14) 01:29:31.482 8638.836 - 8698.415: 0.3481% ( 22) 01:29:31.482 8698.415 - 8757.993: 0.7529% ( 50) 01:29:31.482 8757.993 - 8817.571: 1.3358% ( 72) 01:29:31.482 8817.571 - 8877.149: 2.1778% ( 104) 01:29:31.482 8877.149 - 8936.727: 3.2464% ( 132) 01:29:31.482 8936.727 - 8996.305: 4.4527% ( 149) 01:29:31.482 8996.305 - 9055.884: 5.9990% ( 191) 01:29:31.482 9055.884 - 9115.462: 8.0149% ( 249) 01:29:31.482 9115.462 - 9175.040: 10.2898% ( 281) 01:29:31.482 9175.040 - 9234.618: 12.8967% ( 322) 01:29:31.482 9234.618 - 9294.196: 15.7707% ( 355) 01:29:31.482 9294.196 - 9353.775: 19.1548% ( 418) 01:29:31.482 9353.775 - 9413.353: 22.7089% ( 439) 01:29:31.482 9413.353 - 9472.931: 26.6435% ( 486) 01:29:31.482 9472.931 - 9532.509: 30.7157% ( 503) 01:29:31.482 9532.509 - 9592.087: 34.8203% ( 507) 01:29:31.482 9592.087 - 9651.665: 38.9734% ( 513) 01:29:31.482 9651.665 - 9711.244: 43.1266% ( 513) 01:29:31.482 9711.244 - 9770.822: 47.0207% ( 481) 01:29:31.482 9770.822 - 9830.400: 50.8015% ( 467) 01:29:31.482 9830.400 - 9889.978: 54.5580% ( 464) 01:29:31.482 9889.978 - 9949.556: 58.3225% ( 465) 01:29:31.482 9949.556 - 10009.135: 61.9171% ( 444) 01:29:31.482 10009.135 - 10068.713: 65.3578% ( 425) 01:29:31.482 10068.713 - 10128.291: 68.1023% ( 339) 01:29:31.482 10128.291 - 10187.869: 70.5878% ( 307) 01:29:31.482 10187.869 - 10247.447: 72.8222% ( 276) 01:29:31.482 10247.447 - 10307.025: 74.8705% ( 253) 01:29:31.482 10307.025 - 10366.604: 76.7163% ( 228) 01:29:31.482 10366.604 - 10426.182: 78.2869% ( 194) 01:29:31.482 10426.182 - 10485.760: 79.5661% ( 158) 01:29:31.482 10485.760 - 10545.338: 80.6671% ( 136) 01:29:31.482 10545.338 - 10604.916: 81.6791% ( 125) 01:29:31.482 10604.916 - 10664.495: 82.6344% ( 118) 01:29:31.482 10664.495 - 10724.073: 83.4440% ( 100) 01:29:31.482 10724.073 - 10783.651: 84.2131% ( 95) 01:29:31.482 10783.651 - 10843.229: 84.9012% ( 85) 01:29:31.482 10843.229 - 10902.807: 85.5084% ( 75) 01:29:31.482 10902.807 - 10962.385: 85.9618% ( 56) 01:29:31.482 10962.385 - 11021.964: 86.5366% ( 71) 01:29:31.482 11021.964 - 11081.542: 87.0871% ( 68) 01:29:31.482 11081.542 - 11141.120: 87.5891% ( 62) 01:29:31.482 11141.120 - 11200.698: 
88.1315% ( 67) 01:29:31.482 11200.698 - 11260.276: 88.7306% ( 74) 01:29:31.482 11260.276 - 11319.855: 89.3216% ( 73) 01:29:31.482 11319.855 - 11379.433: 89.9126% ( 73) 01:29:31.482 11379.433 - 11439.011: 90.5683% ( 81) 01:29:31.482 11439.011 - 11498.589: 91.2484% ( 84) 01:29:31.482 11498.589 - 11558.167: 91.9203% ( 83) 01:29:31.482 11558.167 - 11617.745: 92.5599% ( 79) 01:29:31.482 11617.745 - 11677.324: 93.2157% ( 81) 01:29:31.482 11677.324 - 11736.902: 93.8795% ( 82) 01:29:31.482 11736.902 - 11796.480: 94.4786% ( 74) 01:29:31.482 11796.480 - 11856.058: 94.9887% ( 63) 01:29:31.482 11856.058 - 11915.636: 95.4339% ( 55) 01:29:31.482 11915.636 - 11975.215: 95.7416% ( 38) 01:29:31.482 11975.215 - 12034.793: 96.0168% ( 34) 01:29:31.482 12034.793 - 12094.371: 96.2840% ( 33) 01:29:31.482 12094.371 - 12153.949: 96.5026% ( 27) 01:29:31.482 12153.949 - 12213.527: 96.6645% ( 20) 01:29:31.482 12213.527 - 12273.105: 96.7940% ( 16) 01:29:31.482 12273.105 - 12332.684: 96.9155% ( 15) 01:29:31.482 12332.684 - 12392.262: 96.9641% ( 6) 01:29:31.482 12392.262 - 12451.840: 96.9964% ( 4) 01:29:31.482 12451.840 - 12511.418: 97.0288% ( 4) 01:29:31.482 12511.418 - 12570.996: 97.0612% ( 4) 01:29:31.482 12570.996 - 12630.575: 97.0855% ( 3) 01:29:31.482 12630.575 - 12690.153: 97.1179% ( 4) 01:29:31.482 12690.153 - 12749.731: 97.1503% ( 4) 01:29:31.482 12749.731 - 12809.309: 97.1745% ( 3) 01:29:31.482 12809.309 - 12868.887: 97.2150% ( 5) 01:29:31.482 12868.887 - 12928.465: 97.2555% ( 5) 01:29:31.482 12928.465 - 12988.044: 97.3284% ( 9) 01:29:31.482 12988.044 - 13047.622: 97.3931% ( 8) 01:29:31.482 13047.622 - 13107.200: 97.4498% ( 7) 01:29:31.482 13107.200 - 13166.778: 97.5146% ( 8) 01:29:31.482 13166.778 - 13226.356: 97.5631% ( 6) 01:29:31.482 13226.356 - 13285.935: 97.6198% ( 7) 01:29:31.482 13285.935 - 13345.513: 97.6522% ( 4) 01:29:31.482 13345.513 - 13405.091: 97.6846% ( 4) 01:29:31.482 13405.091 - 13464.669: 97.7170% ( 4) 01:29:31.482 13464.669 - 13524.247: 97.7494% ( 4) 01:29:31.483 13524.247 - 13583.825: 97.7817% ( 4) 01:29:31.483 13583.825 - 13643.404: 97.8141% ( 4) 01:29:31.483 13643.404 - 13702.982: 97.8627% ( 6) 01:29:31.483 13702.982 - 13762.560: 97.9275% ( 8) 01:29:31.483 13762.560 - 13822.138: 97.9841% ( 7) 01:29:31.483 13822.138 - 13881.716: 98.0408% ( 7) 01:29:31.483 13881.716 - 13941.295: 98.0732% ( 4) 01:29:31.483 13941.295 - 14000.873: 98.1056% ( 4) 01:29:31.483 14000.873 - 14060.451: 98.1299% ( 3) 01:29:31.483 14060.451 - 14120.029: 98.1703% ( 5) 01:29:31.483 14120.029 - 14179.607: 98.2027% ( 4) 01:29:31.483 14179.607 - 14239.185: 98.2351% ( 4) 01:29:31.483 14239.185 - 14298.764: 98.2675% ( 4) 01:29:31.483 14298.764 - 14358.342: 98.2918% ( 3) 01:29:31.483 14358.342 - 14417.920: 98.3242% ( 4) 01:29:31.483 14417.920 - 14477.498: 98.3565% ( 4) 01:29:31.483 14477.498 - 14537.076: 98.3970% ( 5) 01:29:31.483 14537.076 - 14596.655: 98.4294% ( 4) 01:29:31.483 14596.655 - 14656.233: 98.4375% ( 1) 01:29:31.483 14656.233 - 14715.811: 98.4456% ( 1) 01:29:31.483 15132.858 - 15192.436: 98.4537% ( 1) 01:29:31.483 15192.436 - 15252.015: 98.4780% ( 3) 01:29:31.483 15252.015 - 15371.171: 98.5427% ( 8) 01:29:31.483 15371.171 - 15490.327: 98.6075% ( 8) 01:29:31.483 15490.327 - 15609.484: 98.6723% ( 8) 01:29:31.483 15609.484 - 15728.640: 98.7370% ( 8) 01:29:31.483 15728.640 - 15847.796: 98.8018% ( 8) 01:29:31.483 15847.796 - 15966.953: 98.8666% ( 8) 01:29:31.483 15966.953 - 16086.109: 98.9394% ( 9) 01:29:31.483 16086.109 - 16205.265: 98.9637% ( 3) 01:29:31.483 35746.909 - 35985.222: 99.0042% ( 5) 01:29:31.483 
35985.222 - 36223.535: 99.0609% ( 7) 01:29:31.483 36223.535 - 36461.847: 99.1176% ( 7) 01:29:31.483 36461.847 - 36700.160: 99.1742% ( 7) 01:29:31.483 36700.160 - 36938.473: 99.2147% ( 5) 01:29:31.483 36938.473 - 37176.785: 99.2795% ( 8) 01:29:31.483 37176.785 - 37415.098: 99.3361% ( 7) 01:29:31.483 37415.098 - 37653.411: 99.3928% ( 7) 01:29:31.483 37653.411 - 37891.724: 99.4576% ( 8) 01:29:31.483 37891.724 - 38130.036: 99.4819% ( 3) 01:29:31.483 45994.356 - 46232.669: 99.5304% ( 6) 01:29:31.483 46232.669 - 46470.982: 99.5871% ( 7) 01:29:31.483 46470.982 - 46709.295: 99.6438% ( 7) 01:29:31.483 46709.295 - 46947.607: 99.7085% ( 8) 01:29:31.483 46947.607 - 47185.920: 99.7733% ( 8) 01:29:31.483 47185.920 - 47424.233: 99.8300% ( 7) 01:29:31.483 47424.233 - 47662.545: 99.8867% ( 7) 01:29:31.483 47662.545 - 47900.858: 99.9514% ( 8) 01:29:31.483 47900.858 - 48139.171: 100.0000% ( 6) 01:29:31.483 01:29:31.483 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 01:29:31.483 ============================================================================== 01:29:31.483 Range in us Cumulative IO count 01:29:31.483 8579.258 - 8638.836: 0.0972% ( 12) 01:29:31.483 8638.836 - 8698.415: 0.2510% ( 19) 01:29:31.483 8698.415 - 8757.993: 0.6315% ( 47) 01:29:31.483 8757.993 - 8817.571: 1.0687% ( 54) 01:29:31.483 8817.571 - 8877.149: 2.0078% ( 116) 01:29:31.483 8877.149 - 8936.727: 2.9469% ( 116) 01:29:31.483 8936.727 - 8996.305: 4.3151% ( 169) 01:29:31.483 8996.305 - 9055.884: 5.9262% ( 199) 01:29:31.483 9055.884 - 9115.462: 7.8125% ( 233) 01:29:31.483 9115.462 - 9175.040: 10.2736% ( 304) 01:29:31.483 9175.040 - 9234.618: 12.9534% ( 331) 01:29:31.483 9234.618 - 9294.196: 16.0136% ( 378) 01:29:31.483 9294.196 - 9353.775: 19.1143% ( 383) 01:29:31.483 9353.775 - 9413.353: 22.6684% ( 439) 01:29:31.483 9413.353 - 9472.931: 26.5544% ( 480) 01:29:31.483 9472.931 - 9532.509: 30.6833% ( 510) 01:29:31.483 9532.509 - 9592.087: 34.7636% ( 504) 01:29:31.483 9592.087 - 9651.665: 38.8196% ( 501) 01:29:31.483 9651.665 - 9711.244: 42.8999% ( 504) 01:29:31.483 9711.244 - 9770.822: 46.8993% ( 494) 01:29:31.483 9770.822 - 9830.400: 50.9310% ( 498) 01:29:31.483 9830.400 - 9889.978: 54.7037% ( 466) 01:29:31.483 9889.978 - 9949.556: 58.4278% ( 460) 01:29:31.483 9949.556 - 10009.135: 61.9738% ( 438) 01:29:31.483 10009.135 - 10068.713: 65.2607% ( 406) 01:29:31.483 10068.713 - 10128.291: 68.1671% ( 359) 01:29:31.483 10128.291 - 10187.869: 70.6606% ( 308) 01:29:31.483 10187.869 - 10247.447: 72.8627% ( 272) 01:29:31.483 10247.447 - 10307.025: 75.0162% ( 266) 01:29:31.483 10307.025 - 10366.604: 76.9349% ( 237) 01:29:31.483 10366.604 - 10426.182: 78.4812% ( 191) 01:29:31.483 10426.182 - 10485.760: 79.8656% ( 171) 01:29:31.483 10485.760 - 10545.338: 80.9990% ( 140) 01:29:31.483 10545.338 - 10604.916: 81.9867% ( 122) 01:29:31.483 10604.916 - 10664.495: 82.7882% ( 99) 01:29:31.483 10664.495 - 10724.073: 83.5168% ( 90) 01:29:31.483 10724.073 - 10783.651: 84.1969% ( 84) 01:29:31.483 10783.651 - 10843.229: 84.7717% ( 71) 01:29:31.483 10843.229 - 10902.807: 85.4275% ( 81) 01:29:31.483 10902.807 - 10962.385: 86.0023% ( 71) 01:29:31.483 10962.385 - 11021.964: 86.5285% ( 65) 01:29:31.483 11021.964 - 11081.542: 87.0547% ( 65) 01:29:31.483 11081.542 - 11141.120: 87.6052% ( 68) 01:29:31.483 11141.120 - 11200.698: 88.1072% ( 62) 01:29:31.483 11200.698 - 11260.276: 88.7063% ( 74) 01:29:31.483 11260.276 - 11319.855: 89.3297% ( 77) 01:29:31.483 11319.855 - 11379.433: 89.9773% ( 80) 01:29:31.483 11379.433 - 11439.011: 90.5521% ( 71) 01:29:31.483 
11439.011 - 11498.589: 91.1836% ( 78) 01:29:31.483 11498.589 - 11558.167: 91.8313% ( 80) 01:29:31.483 11558.167 - 11617.745: 92.4547% ( 77) 01:29:31.483 11617.745 - 11677.324: 93.0457% ( 73) 01:29:31.483 11677.324 - 11736.902: 93.6448% ( 74) 01:29:31.483 11736.902 - 11796.480: 94.2196% ( 71) 01:29:31.483 11796.480 - 11856.058: 94.7701% ( 68) 01:29:31.483 11856.058 - 11915.636: 95.1668% ( 49) 01:29:31.483 11915.636 - 11975.215: 95.5797% ( 51) 01:29:31.483 11975.215 - 12034.793: 95.8630% ( 35) 01:29:31.483 12034.793 - 12094.371: 96.1464% ( 35) 01:29:31.483 12094.371 - 12153.949: 96.3488% ( 25) 01:29:31.483 12153.949 - 12213.527: 96.5026% ( 19) 01:29:31.483 12213.527 - 12273.105: 96.6321% ( 16) 01:29:31.483 12273.105 - 12332.684: 96.7455% ( 14) 01:29:31.483 12332.684 - 12392.262: 96.8102% ( 8) 01:29:31.483 12392.262 - 12451.840: 96.8588% ( 6) 01:29:31.483 12451.840 - 12511.418: 96.8831% ( 3) 01:29:31.483 12511.418 - 12570.996: 96.8912% ( 1) 01:29:31.483 12630.575 - 12690.153: 96.9074% ( 2) 01:29:31.483 12690.153 - 12749.731: 96.9317% ( 3) 01:29:31.483 12749.731 - 12809.309: 96.9641% ( 4) 01:29:31.483 12809.309 - 12868.887: 96.9883% ( 3) 01:29:31.483 12868.887 - 12928.465: 97.0450% ( 7) 01:29:31.483 12928.465 - 12988.044: 97.0855% ( 5) 01:29:31.483 12988.044 - 13047.622: 97.1422% ( 7) 01:29:31.483 13047.622 - 13107.200: 97.1826% ( 5) 01:29:31.483 13107.200 - 13166.778: 97.2474% ( 8) 01:29:31.483 13166.778 - 13226.356: 97.3284% ( 10) 01:29:31.483 13226.356 - 13285.935: 97.4012% ( 9) 01:29:31.483 13285.935 - 13345.513: 97.4984% ( 12) 01:29:31.483 13345.513 - 13405.091: 97.5874% ( 11) 01:29:31.484 13405.091 - 13464.669: 97.6765% ( 11) 01:29:31.484 13464.669 - 13524.247: 97.7736% ( 12) 01:29:31.484 13524.247 - 13583.825: 97.8627% ( 11) 01:29:31.484 13583.825 - 13643.404: 97.9517% ( 11) 01:29:31.484 13643.404 - 13702.982: 98.0408% ( 11) 01:29:31.484 13702.982 - 13762.560: 98.1218% ( 10) 01:29:31.484 13762.560 - 13822.138: 98.1784% ( 7) 01:29:31.484 13822.138 - 13881.716: 98.2432% ( 8) 01:29:31.484 13881.716 - 13941.295: 98.3080% ( 8) 01:29:31.484 13941.295 - 14000.873: 98.3403% ( 4) 01:29:31.484 14000.873 - 14060.451: 98.3727% ( 4) 01:29:31.484 14060.451 - 14120.029: 98.3970% ( 3) 01:29:31.484 14120.029 - 14179.607: 98.4213% ( 3) 01:29:31.484 14179.607 - 14239.185: 98.4456% ( 3) 01:29:31.484 15371.171 - 15490.327: 98.4861% ( 5) 01:29:31.484 15490.327 - 15609.484: 98.5185% ( 4) 01:29:31.484 15609.484 - 15728.640: 98.5832% ( 8) 01:29:31.484 15728.640 - 15847.796: 98.6399% ( 7) 01:29:31.484 15847.796 - 15966.953: 98.7128% ( 9) 01:29:31.484 15966.953 - 16086.109: 98.7775% ( 8) 01:29:31.484 16086.109 - 16205.265: 98.8423% ( 8) 01:29:31.484 16205.265 - 16324.422: 98.9071% ( 8) 01:29:31.484 16324.422 - 16443.578: 98.9637% ( 7) 01:29:31.484 33363.782 - 33602.095: 98.9880% ( 3) 01:29:31.484 33602.095 - 33840.407: 99.0447% ( 7) 01:29:31.484 33840.407 - 34078.720: 99.0933% ( 6) 01:29:31.484 34078.720 - 34317.033: 99.1580% ( 8) 01:29:31.484 34317.033 - 34555.345: 99.2147% ( 7) 01:29:31.484 34555.345 - 34793.658: 99.2714% ( 7) 01:29:31.484 34793.658 - 35031.971: 99.3361% ( 8) 01:29:31.484 35031.971 - 35270.284: 99.4009% ( 8) 01:29:31.484 35270.284 - 35508.596: 99.4576% ( 7) 01:29:31.484 35508.596 - 35746.909: 99.4819% ( 3) 01:29:31.484 43372.916 - 43611.229: 99.5142% ( 4) 01:29:31.484 43611.229 - 43849.542: 99.5790% ( 8) 01:29:31.484 43849.542 - 44087.855: 99.6276% ( 6) 01:29:31.484 44087.855 - 44326.167: 99.7005% ( 9) 01:29:31.484 44326.167 - 44564.480: 99.7571% ( 7) 01:29:31.484 44564.480 - 44802.793: 
99.8138% ( 7) 01:29:31.484 44802.793 - 45041.105: 99.8786% ( 8) 01:29:31.484 45041.105 - 45279.418: 99.9352% ( 7) 01:29:31.484 45279.418 - 45517.731: 99.9919% ( 7) 01:29:31.484 45517.731 - 45756.044: 100.0000% ( 1) 01:29:31.484 01:29:31.484 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 01:29:31.484 ============================================================================== 01:29:31.484 Range in us Cumulative IO count 01:29:31.484 8519.680 - 8579.258: 0.0161% ( 2) 01:29:31.484 8579.258 - 8638.836: 0.1208% ( 13) 01:29:31.484 8638.836 - 8698.415: 0.2819% ( 20) 01:29:31.484 8698.415 - 8757.993: 0.5880% ( 38) 01:29:31.484 8757.993 - 8817.571: 1.0390% ( 56) 01:29:31.484 8817.571 - 8877.149: 1.8847% ( 105) 01:29:31.484 8877.149 - 8936.727: 2.8995% ( 126) 01:29:31.484 8936.727 - 8996.305: 4.1962% ( 161) 01:29:31.484 8996.305 - 9055.884: 5.8392% ( 204) 01:29:31.484 9055.884 - 9115.462: 7.7722% ( 240) 01:29:31.484 9115.462 - 9175.040: 10.1482% ( 295) 01:29:31.484 9175.040 - 9234.618: 12.7336% ( 321) 01:29:31.484 9234.618 - 9294.196: 15.7136% ( 370) 01:29:31.484 9294.196 - 9353.775: 18.9916% ( 407) 01:29:31.484 9353.775 - 9413.353: 22.4307% ( 427) 01:29:31.484 9413.353 - 9472.931: 26.3450% ( 486) 01:29:31.484 9472.931 - 9532.509: 30.2835% ( 489) 01:29:31.484 9532.509 - 9592.087: 34.2703% ( 495) 01:29:31.484 9592.087 - 9651.665: 38.2974% ( 500) 01:29:31.484 9651.665 - 9711.244: 42.4452% ( 515) 01:29:31.484 9711.244 - 9770.822: 46.5528% ( 510) 01:29:31.484 9770.822 - 9830.400: 50.4994% ( 490) 01:29:31.484 9830.400 - 9889.978: 54.5103% ( 498) 01:29:31.484 9889.978 - 9949.556: 58.0380% ( 438) 01:29:31.484 9949.556 - 10009.135: 61.5577% ( 437) 01:29:31.484 10009.135 - 10068.713: 64.9001% ( 415) 01:29:31.484 10068.713 - 10128.291: 67.8479% ( 366) 01:29:31.484 10128.291 - 10187.869: 70.4655% ( 325) 01:29:31.484 10187.869 - 10247.447: 72.8898% ( 301) 01:29:31.484 10247.447 - 10307.025: 75.0081% ( 263) 01:29:31.484 10307.025 - 10366.604: 76.7639% ( 218) 01:29:31.484 10366.604 - 10426.182: 78.2700% ( 187) 01:29:31.484 10426.182 - 10485.760: 79.5103% ( 154) 01:29:31.484 10485.760 - 10545.338: 80.6701% ( 144) 01:29:31.484 10545.338 - 10604.916: 81.6769% ( 125) 01:29:31.484 10604.916 - 10664.495: 82.5467% ( 108) 01:29:31.484 10664.495 - 10724.073: 83.3360% ( 98) 01:29:31.484 10724.073 - 10783.651: 83.9562% ( 77) 01:29:31.484 10783.651 - 10843.229: 84.5844% ( 78) 01:29:31.484 10843.229 - 10902.807: 85.0596% ( 59) 01:29:31.484 10902.807 - 10962.385: 85.5428% ( 60) 01:29:31.484 10962.385 - 11021.964: 86.0503% ( 63) 01:29:31.484 11021.964 - 11081.542: 86.6302% ( 72) 01:29:31.484 11081.542 - 11141.120: 87.1537% ( 65) 01:29:31.484 11141.120 - 11200.698: 87.6852% ( 66) 01:29:31.484 11200.698 - 11260.276: 88.3457% ( 82) 01:29:31.484 11260.276 - 11319.855: 88.9014% ( 69) 01:29:31.484 11319.855 - 11379.433: 89.4330% ( 66) 01:29:31.484 11379.433 - 11439.011: 90.0370% ( 75) 01:29:31.484 11439.011 - 11498.589: 90.6331% ( 74) 01:29:31.484 11498.589 - 11558.167: 91.3015% ( 83) 01:29:31.484 11558.167 - 11617.745: 91.9459% ( 80) 01:29:31.484 11617.745 - 11677.324: 92.5902% ( 80) 01:29:31.484 11677.324 - 11736.902: 93.1701% ( 72) 01:29:31.484 11736.902 - 11796.480: 93.6856% ( 64) 01:29:31.484 11796.480 - 11856.058: 94.1527% ( 58) 01:29:31.484 11856.058 - 11915.636: 94.6118% ( 57) 01:29:31.484 11915.636 - 11975.215: 95.0064% ( 49) 01:29:31.484 11975.215 - 12034.793: 95.3447% ( 42) 01:29:31.484 12034.793 - 12094.371: 95.6508% ( 38) 01:29:31.484 12094.371 - 12153.949: 95.8038% ( 19) 01:29:31.484 12153.949 - 
12213.527: 95.9568% ( 19) 01:29:31.484 12213.527 - 12273.105: 96.0615% ( 13) 01:29:31.484 12273.105 - 12332.684: 96.1421% ( 10) 01:29:31.484 12332.684 - 12392.262: 96.1904% ( 6) 01:29:31.484 12392.262 - 12451.840: 96.2065% ( 2) 01:29:31.484 12451.840 - 12511.418: 96.2307% ( 3) 01:29:31.484 12511.418 - 12570.996: 96.2709% ( 5) 01:29:31.484 12570.996 - 12630.575: 96.3193% ( 6) 01:29:31.484 12630.575 - 12690.153: 96.3595% ( 5) 01:29:31.484 12690.153 - 12749.731: 96.4240% ( 8) 01:29:31.484 12749.731 - 12809.309: 96.4965% ( 9) 01:29:31.484 12809.309 - 12868.887: 96.5448% ( 6) 01:29:31.484 12868.887 - 12928.465: 96.6253% ( 10) 01:29:31.484 12928.465 - 12988.044: 96.6898% ( 8) 01:29:31.484 12988.044 - 13047.622: 96.7703% ( 10) 01:29:31.484 13047.622 - 13107.200: 96.8347% ( 8) 01:29:31.484 13107.200 - 13166.778: 96.8911% ( 7) 01:29:31.484 13166.778 - 13226.356: 96.9555% ( 8) 01:29:31.484 13226.356 - 13285.935: 97.0280% ( 9) 01:29:31.484 13285.935 - 13345.513: 97.1166% ( 11) 01:29:31.484 13345.513 - 13405.091: 97.2052% ( 11) 01:29:31.484 13405.091 - 13464.669: 97.2858% ( 10) 01:29:31.484 13464.669 - 13524.247: 97.3744% ( 11) 01:29:31.484 13524.247 - 13583.825: 97.4468% ( 9) 01:29:31.484 13583.825 - 13643.404: 97.5435% ( 12) 01:29:31.484 13643.404 - 13702.982: 97.6240% ( 10) 01:29:31.484 13702.982 - 13762.560: 97.7046% ( 10) 01:29:31.484 13762.560 - 13822.138: 97.7690% ( 8) 01:29:31.485 13822.138 - 13881.716: 97.8173% ( 6) 01:29:31.485 13881.716 - 13941.295: 97.8818% ( 8) 01:29:31.485 13941.295 - 14000.873: 97.9301% ( 6) 01:29:31.485 14000.873 - 14060.451: 97.9945% ( 8) 01:29:31.485 14060.451 - 14120.029: 98.0509% ( 7) 01:29:31.485 14120.029 - 14179.607: 98.1073% ( 7) 01:29:31.485 14179.607 - 14239.185: 98.1637% ( 7) 01:29:31.485 14239.185 - 14298.764: 98.2200% ( 7) 01:29:31.485 14298.764 - 14358.342: 98.2764% ( 7) 01:29:31.485 14358.342 - 14417.920: 98.3247% ( 6) 01:29:31.485 14417.920 - 14477.498: 98.3731% ( 6) 01:29:31.485 14477.498 - 14537.076: 98.4053% ( 4) 01:29:31.485 14537.076 - 14596.655: 98.4375% ( 4) 01:29:31.485 14596.655 - 14656.233: 98.4536% ( 2) 01:29:31.485 15609.484 - 15728.640: 98.5100% ( 7) 01:29:31.485 15728.640 - 15847.796: 98.5664% ( 7) 01:29:31.485 15847.796 - 15966.953: 98.6308% ( 8) 01:29:31.485 15966.953 - 16086.109: 98.6952% ( 8) 01:29:31.485 16086.109 - 16205.265: 98.7597% ( 8) 01:29:31.485 16205.265 - 16324.422: 98.8241% ( 8) 01:29:31.485 16324.422 - 16443.578: 98.8885% ( 8) 01:29:31.485 16443.578 - 16562.735: 98.9449% ( 7) 01:29:31.485 16562.735 - 16681.891: 98.9691% ( 3) 01:29:31.485 25499.462 - 25618.618: 98.9852% ( 2) 01:29:31.485 25618.618 - 25737.775: 99.0093% ( 3) 01:29:31.485 25737.775 - 25856.931: 99.0416% ( 4) 01:29:31.485 25856.931 - 25976.087: 99.0738% ( 4) 01:29:31.485 25976.087 - 26095.244: 99.1060% ( 4) 01:29:31.485 26095.244 - 26214.400: 99.1302% ( 3) 01:29:31.485 26214.400 - 26333.556: 99.1624% ( 4) 01:29:31.485 26333.556 - 26452.713: 99.1865% ( 3) 01:29:31.485 26452.713 - 26571.869: 99.2188% ( 4) 01:29:31.485 26571.869 - 26691.025: 99.2429% ( 3) 01:29:31.485 26691.025 - 26810.182: 99.2751% ( 4) 01:29:31.485 26810.182 - 26929.338: 99.3073% ( 4) 01:29:31.485 26929.338 - 27048.495: 99.3315% ( 3) 01:29:31.485 27048.495 - 27167.651: 99.3557% ( 3) 01:29:31.485 27167.651 - 27286.807: 99.3879% ( 4) 01:29:31.485 27286.807 - 27405.964: 99.4201% ( 4) 01:29:31.485 27405.964 - 27525.120: 99.4443% ( 3) 01:29:31.485 27525.120 - 27644.276: 99.4684% ( 3) 01:29:31.485 27644.276 - 27763.433: 99.4845% ( 2) 01:29:31.485 32648.844 - 32887.156: 99.5087% ( 3) 01:29:31.485 
32887.156 - 33125.469: 99.5731% ( 8) 01:29:31.485 33125.469 - 33363.782: 99.6134% ( 5) 01:29:31.485 33363.782 - 33602.095: 99.6778% ( 8) 01:29:31.485 33602.095 - 33840.407: 99.7342% ( 7) 01:29:31.485 33840.407 - 34078.720: 99.7906% ( 7) 01:29:31.485 34078.720 - 34317.033: 99.8550% ( 8) 01:29:31.485 34317.033 - 34555.345: 99.9114% ( 7) 01:29:31.485 34555.345 - 34793.658: 99.9678% ( 7) 01:29:31.485 34793.658 - 35031.971: 100.0000% ( 4) 01:29:31.485 01:29:31.485 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 01:29:31.485 ============================================================================== 01:29:31.485 Range in us Cumulative IO count 01:29:31.485 8519.680 - 8579.258: 0.0644% ( 8) 01:29:31.485 8579.258 - 8638.836: 0.1772% ( 14) 01:29:31.485 8638.836 - 8698.415: 0.3866% ( 26) 01:29:31.485 8698.415 - 8757.993: 0.7329% ( 43) 01:29:31.485 8757.993 - 8817.571: 1.2645% ( 66) 01:29:31.485 8817.571 - 8877.149: 2.0377% ( 96) 01:29:31.485 8877.149 - 8936.727: 3.0284% ( 123) 01:29:31.485 8936.727 - 8996.305: 4.3734% ( 167) 01:29:31.485 8996.305 - 9055.884: 5.9439% ( 195) 01:29:31.485 9055.884 - 9115.462: 7.9414% ( 248) 01:29:31.485 9115.462 - 9175.040: 10.2448% ( 286) 01:29:31.485 9175.040 - 9234.618: 12.8141% ( 319) 01:29:31.485 9234.618 - 9294.196: 15.7458% ( 364) 01:29:31.485 9294.196 - 9353.775: 19.0722% ( 413) 01:29:31.485 9353.775 - 9413.353: 22.4146% ( 415) 01:29:31.485 9413.353 - 9472.931: 25.9343% ( 437) 01:29:31.485 9472.931 - 9532.509: 29.6633% ( 463) 01:29:31.485 9532.509 - 9592.087: 33.5454% ( 482) 01:29:31.485 9592.087 - 9651.665: 37.5564% ( 498) 01:29:31.485 9651.665 - 9711.244: 41.8895% ( 538) 01:29:31.485 9711.244 - 9770.822: 46.1501% ( 529) 01:29:31.485 9770.822 - 9830.400: 50.3061% ( 516) 01:29:31.485 9830.400 - 9889.978: 54.1157% ( 473) 01:29:31.485 9889.978 - 9949.556: 57.9091% ( 471) 01:29:31.485 9949.556 - 10009.135: 61.4369% ( 438) 01:29:31.485 10009.135 - 10068.713: 64.6746% ( 402) 01:29:31.485 10068.713 - 10128.291: 67.6144% ( 365) 01:29:31.485 10128.291 - 10187.869: 70.3125% ( 335) 01:29:31.485 10187.869 - 10247.447: 72.8334% ( 313) 01:29:31.485 10247.447 - 10307.025: 74.8550% ( 251) 01:29:31.485 10307.025 - 10366.604: 76.6672% ( 225) 01:29:31.485 10366.604 - 10426.182: 78.2700% ( 199) 01:29:31.485 10426.182 - 10485.760: 79.7519% ( 184) 01:29:31.485 10485.760 - 10545.338: 80.9681% ( 151) 01:29:31.485 10545.338 - 10604.916: 81.9829% ( 126) 01:29:31.485 10604.916 - 10664.495: 82.7803% ( 99) 01:29:31.485 10664.495 - 10724.073: 83.4649% ( 85) 01:29:31.485 10724.073 - 10783.651: 84.1656% ( 87) 01:29:31.485 10783.651 - 10843.229: 84.6891% ( 65) 01:29:31.485 10843.229 - 10902.807: 85.1965% ( 63) 01:29:31.485 10902.807 - 10962.385: 85.6798% ( 60) 01:29:31.485 10962.385 - 11021.964: 86.2597% ( 72) 01:29:31.485 11021.964 - 11081.542: 86.8396% ( 72) 01:29:31.485 11081.542 - 11141.120: 87.4517% ( 76) 01:29:31.485 11141.120 - 11200.698: 88.0235% ( 71) 01:29:31.485 11200.698 - 11260.276: 88.6034% ( 72) 01:29:31.485 11260.276 - 11319.855: 89.2155% ( 76) 01:29:31.485 11319.855 - 11379.433: 89.8679% ( 81) 01:29:31.485 11379.433 - 11439.011: 90.4881% ( 77) 01:29:31.485 11439.011 - 11498.589: 91.0921% ( 75) 01:29:31.485 11498.589 - 11558.167: 91.7445% ( 81) 01:29:31.485 11558.167 - 11617.745: 92.3244% ( 72) 01:29:31.485 11617.745 - 11677.324: 92.9446% ( 77) 01:29:31.485 11677.324 - 11736.902: 93.4842% ( 67) 01:29:31.485 11736.902 - 11796.480: 93.9997% ( 64) 01:29:31.485 11796.480 - 11856.058: 94.4990% ( 62) 01:29:31.485 11856.058 - 11915.636: 94.9340% ( 54) 01:29:31.485 
11915.636 - 11975.215: 95.3528% ( 52) 01:29:31.485 11975.215 - 12034.793: 95.7555% ( 50) 01:29:31.485 12034.793 - 12094.371: 96.0374% ( 35) 01:29:31.485 12094.371 - 12153.949: 96.2226% ( 23) 01:29:31.485 12153.949 - 12213.527: 96.3676% ( 18) 01:29:31.485 12213.527 - 12273.105: 96.4723% ( 13) 01:29:31.485 12273.105 - 12332.684: 96.5931% ( 15) 01:29:31.485 12332.684 - 12392.262: 96.6656% ( 9) 01:29:31.485 12392.262 - 12451.840: 96.7381% ( 9) 01:29:31.485 12451.840 - 12511.418: 96.7784% ( 5) 01:29:31.485 12511.418 - 12570.996: 96.8186% ( 5) 01:29:31.485 12570.996 - 12630.575: 96.8669% ( 6) 01:29:31.485 12630.575 - 12690.153: 96.9233% ( 7) 01:29:31.485 12690.153 - 12749.731: 96.9716% ( 6) 01:29:31.485 12749.731 - 12809.309: 97.0200% ( 6) 01:29:31.485 12809.309 - 12868.887: 97.0683% ( 6) 01:29:31.485 12868.887 - 12928.465: 97.1327% ( 8) 01:29:31.485 12928.465 - 12988.044: 97.1649% ( 4) 01:29:31.485 12988.044 - 13047.622: 97.1972% ( 4) 01:29:31.485 13047.622 - 13107.200: 97.2374% ( 5) 01:29:31.485 13107.200 - 13166.778: 97.2858% ( 6) 01:29:31.485 13166.778 - 13226.356: 97.3421% ( 7) 01:29:31.485 13226.356 - 13285.935: 97.4146% ( 9) 01:29:31.485 13285.935 - 13345.513: 97.4549% ( 5) 01:29:31.485 13345.513 - 13405.091: 97.4871% ( 4) 01:29:31.486 13405.091 - 13464.669: 97.5354% ( 6) 01:29:31.486 13464.669 - 13524.247: 97.5596% ( 3) 01:29:31.486 13524.247 - 13583.825: 97.5838% ( 3) 01:29:31.486 13583.825 - 13643.404: 97.6079% ( 3) 01:29:31.486 13643.404 - 13702.982: 97.6321% ( 3) 01:29:31.486 13702.982 - 13762.560: 97.6643% ( 4) 01:29:31.486 13762.560 - 13822.138: 97.6885% ( 3) 01:29:31.486 13822.138 - 13881.716: 97.7126% ( 3) 01:29:31.486 13881.716 - 13941.295: 97.7287% ( 2) 01:29:31.486 13941.295 - 14000.873: 97.7610% ( 4) 01:29:31.486 14000.873 - 14060.451: 97.7851% ( 3) 01:29:31.486 14060.451 - 14120.029: 97.8173% ( 4) 01:29:31.486 14120.029 - 14179.607: 97.8657% ( 6) 01:29:31.486 14179.607 - 14239.185: 97.9140% ( 6) 01:29:31.486 14239.185 - 14298.764: 97.9704% ( 7) 01:29:31.486 14298.764 - 14358.342: 98.0026% ( 4) 01:29:31.486 14358.342 - 14417.920: 98.0509% ( 6) 01:29:31.486 14417.920 - 14477.498: 98.0912% ( 5) 01:29:31.486 14477.498 - 14537.076: 98.1234% ( 4) 01:29:31.486 14537.076 - 14596.655: 98.1476% ( 3) 01:29:31.486 14596.655 - 14656.233: 98.1717% ( 3) 01:29:31.486 14656.233 - 14715.811: 98.2039% ( 4) 01:29:31.486 14715.811 - 14775.389: 98.2361% ( 4) 01:29:31.486 14775.389 - 14834.967: 98.2684% ( 4) 01:29:31.486 14834.967 - 14894.545: 98.3006% ( 4) 01:29:31.486 14894.545 - 14954.124: 98.3328% ( 4) 01:29:31.486 14954.124 - 15013.702: 98.3650% ( 4) 01:29:31.486 15013.702 - 15073.280: 98.3972% ( 4) 01:29:31.486 15073.280 - 15132.858: 98.4214% ( 3) 01:29:31.486 15132.858 - 15192.436: 98.4375% ( 2) 01:29:31.486 15192.436 - 15252.015: 98.4536% ( 2) 01:29:31.486 15609.484 - 15728.640: 98.4778% ( 3) 01:29:31.486 15728.640 - 15847.796: 98.5019% ( 3) 01:29:31.486 15847.796 - 15966.953: 98.5583% ( 7) 01:29:31.486 15966.953 - 16086.109: 98.6227% ( 8) 01:29:31.486 16086.109 - 16205.265: 98.6952% ( 9) 01:29:31.486 16205.265 - 16324.422: 98.7516% ( 7) 01:29:31.486 16324.422 - 16443.578: 98.8160% ( 8) 01:29:31.486 16443.578 - 16562.735: 98.8805% ( 8) 01:29:31.486 16562.735 - 16681.891: 98.9530% ( 9) 01:29:31.486 16681.891 - 16801.047: 98.9691% ( 2) 01:29:31.486 23354.647 - 23473.804: 98.9852% ( 2) 01:29:31.486 23473.804 - 23592.960: 99.0093% ( 3) 01:29:31.486 23592.960 - 23712.116: 99.0416% ( 4) 01:29:31.486 23712.116 - 23831.273: 99.0657% ( 3) 01:29:31.486 23831.273 - 23950.429: 99.0899% ( 3) 
01:29:31.486 23950.429 - 24069.585: 99.1140% ( 3) 01:29:31.486 24069.585 - 24188.742: 99.1382% ( 3) 01:29:31.486 24188.742 - 24307.898: 99.1704% ( 4) 01:29:31.486 24307.898 - 24427.055: 99.1946% ( 3) 01:29:31.486 24427.055 - 24546.211: 99.2188% ( 3) 01:29:31.486 24546.211 - 24665.367: 99.2429% ( 3) 01:29:31.486 24665.367 - 24784.524: 99.2751% ( 4) 01:29:31.486 24784.524 - 24903.680: 99.3073% ( 4) 01:29:31.486 24903.680 - 25022.836: 99.3315% ( 3) 01:29:31.486 25022.836 - 25141.993: 99.3637% ( 4) 01:29:31.486 25141.993 - 25261.149: 99.3879% ( 3) 01:29:31.486 25261.149 - 25380.305: 99.4120% ( 3) 01:29:31.486 25380.305 - 25499.462: 99.4443% ( 4) 01:29:31.486 25499.462 - 25618.618: 99.4684% ( 3) 01:29:31.486 25618.618 - 25737.775: 99.4845% ( 2) 01:29:31.486 32172.218 - 32410.531: 99.5006% ( 2) 01:29:31.486 32410.531 - 32648.844: 99.5651% ( 8) 01:29:31.486 32648.844 - 32887.156: 99.6295% ( 8) 01:29:31.486 32887.156 - 33125.469: 99.6859% ( 7) 01:29:31.486 33125.469 - 33363.782: 99.7423% ( 7) 01:29:31.486 33363.782 - 33602.095: 99.8067% ( 8) 01:29:31.486 33602.095 - 33840.407: 99.8550% ( 6) 01:29:31.486 33840.407 - 34078.720: 99.9195% ( 8) 01:29:31.486 34078.720 - 34317.033: 99.9758% ( 7) 01:29:31.486 34317.033 - 34555.345: 100.0000% ( 3) 01:29:31.486 01:29:31.486 05:24:23 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 01:29:32.935 Initializing NVMe Controllers 01:29:32.935 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:29:32.935 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:29:32.935 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:29:32.935 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:29:32.935 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:29:32.935 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 01:29:32.935 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 01:29:32.935 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 01:29:32.935 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 01:29:32.935 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 01:29:32.935 Initialization complete. Launching workers. 
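The spdk_nvme_perf invocation above drives sequential 12288-byte (12 KiB) writes at queue depth 128 for one second against every attached controller; -i 0 picks shared-memory ID 0, and the doubled -L turns on the detailed latency tracking that yields the per-device summaries and histograms below. A minimal sketch of an equivalent run plus a sanity check of the results table follows; the readings of -q/-w/-o/-t/-i are the tool's standard flags, while the exact effect of doubling -L is inferred from this log rather than from its help text:

  # -q 128   : 128 outstanding IOs per queue pair
  # -w write : sequential writes
  # -o 12288 : IO size in bytes (12 KiB)
  # -t 1     : run for 1 second
  # -LL      : latency tracking (doubled; this run emits summaries and histograms)
  # -i 0     : shared-memory ID, so the process can coexist with other SPDK apps
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

  # The results table obeys MiB/s = IOPS * IO size / 2^20; checking the first row:
  echo 'scale=4; 12247.57 * 12288 / 1048576' | bc   # -> 143.5262, the 143.53 MiB/s shown for 0000:00:10.0

In the per-device histograms that follow, each row is a latency bucket in microseconds together with the cumulative percentage of IOs completing at or below it; the parenthesized figure appears to be the IO count landing in that bucket.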
01:29:32.935 ======================================================== 01:29:32.935 Latency(us) 01:29:32.935 Device Information : IOPS MiB/s Average min max 01:29:32.935 PCIE (0000:00:10.0) NSID 1 from core 0: 12247.57 143.53 10479.30 8002.04 43084.80 01:29:32.935 PCIE (0000:00:11.0) NSID 1 from core 0: 12247.57 143.53 10459.78 8240.56 40484.47 01:29:32.935 PCIE (0000:00:13.0) NSID 1 from core 0: 12247.57 143.53 10440.27 8281.60 39169.06 01:29:32.935 PCIE (0000:00:12.0) NSID 1 from core 0: 12247.57 143.53 10420.63 8340.79 36903.71 01:29:32.935 PCIE (0000:00:12.0) NSID 2 from core 0: 12311.36 144.27 10346.95 8200.54 28739.66 01:29:32.935 PCIE (0000:00:12.0) NSID 3 from core 0: 12311.36 144.27 10326.76 8370.40 26114.86 01:29:32.935 ======================================================== 01:29:32.935 Total : 73613.00 862.65 10412.15 8002.04 43084.80 01:29:32.935 01:29:32.935 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 01:29:32.935 ================================================================================= 01:29:32.935 1.00000% : 8519.680us 01:29:32.936 10.00000% : 8936.727us 01:29:32.936 25.00000% : 9353.775us 01:29:32.936 50.00000% : 10068.713us 01:29:32.936 75.00000% : 10962.385us 01:29:32.936 90.00000% : 11736.902us 01:29:32.936 95.00000% : 12332.684us 01:29:32.936 98.00000% : 13166.778us 01:29:32.936 99.00000% : 33602.095us 01:29:32.936 99.50000% : 40989.789us 01:29:32.936 99.90000% : 42657.978us 01:29:32.936 99.99000% : 43134.604us 01:29:32.936 99.99900% : 43134.604us 01:29:32.936 99.99990% : 43134.604us 01:29:32.936 99.99999% : 43134.604us 01:29:32.936 01:29:32.936 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 01:29:32.936 ================================================================================= 01:29:32.936 1.00000% : 8638.836us 01:29:32.936 10.00000% : 8996.305us 01:29:32.936 25.00000% : 9353.775us 01:29:32.936 50.00000% : 10068.713us 01:29:32.936 75.00000% : 10962.385us 01:29:32.936 90.00000% : 11677.324us 01:29:32.936 95.00000% : 12332.684us 01:29:32.936 98.00000% : 13285.935us 01:29:32.936 99.00000% : 31457.280us 01:29:32.936 99.50000% : 38606.662us 01:29:32.936 99.90000% : 40274.851us 01:29:32.936 99.99000% : 40513.164us 01:29:32.936 99.99900% : 40513.164us 01:29:32.936 99.99990% : 40513.164us 01:29:32.936 99.99999% : 40513.164us 01:29:32.936 01:29:32.936 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 01:29:32.936 ================================================================================= 01:29:32.936 1.00000% : 8638.836us 01:29:32.936 10.00000% : 9055.884us 01:29:32.936 25.00000% : 9353.775us 01:29:32.936 50.00000% : 10068.713us 01:29:32.936 75.00000% : 10962.385us 01:29:32.936 90.00000% : 11736.902us 01:29:32.936 95.00000% : 12273.105us 01:29:32.936 98.00000% : 12928.465us 01:29:32.936 99.00000% : 30027.404us 01:29:32.936 99.50000% : 37415.098us 01:29:32.936 99.90000% : 38844.975us 01:29:32.936 99.99000% : 39321.600us 01:29:32.936 99.99900% : 39321.600us 01:29:32.936 99.99990% : 39321.600us 01:29:32.936 99.99999% : 39321.600us 01:29:32.936 01:29:32.936 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 01:29:32.936 ================================================================================= 01:29:32.936 1.00000% : 8698.415us 01:29:32.936 10.00000% : 8996.305us 01:29:32.936 25.00000% : 9353.775us 01:29:32.936 50.00000% : 10068.713us 01:29:32.936 75.00000% : 10962.385us 01:29:32.936 90.00000% : 11736.902us 01:29:32.936 95.00000% : 12273.105us 01:29:32.936 98.00000% : 12928.465us 
01:29:32.936 99.00000% : 27644.276us 01:29:32.936 99.50000% : 35031.971us 01:29:32.936 99.90000% : 36700.160us 01:29:32.936 99.99000% : 36938.473us 01:29:32.936 99.99900% : 36938.473us 01:29:32.936 99.99990% : 36938.473us 01:29:32.936 99.99999% : 36938.473us 01:29:32.936 01:29:32.936 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 01:29:32.936 ================================================================================= 01:29:32.936 1.00000% : 8698.415us 01:29:32.936 10.00000% : 8996.305us 01:29:32.936 25.00000% : 9353.775us 01:29:32.936 50.00000% : 10068.713us 01:29:32.936 75.00000% : 10962.385us 01:29:32.936 90.00000% : 11677.324us 01:29:32.936 95.00000% : 12332.684us 01:29:32.936 98.00000% : 13047.622us 01:29:32.936 99.00000% : 19660.800us 01:29:32.936 99.50000% : 26571.869us 01:29:32.936 99.90000% : 28478.371us 01:29:32.936 99.99000% : 28716.684us 01:29:32.936 99.99900% : 28835.840us 01:29:32.936 99.99990% : 28835.840us 01:29:32.936 99.99999% : 28835.840us 01:29:32.936 01:29:32.936 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 01:29:32.936 ================================================================================= 01:29:32.936 1.00000% : 8638.836us 01:29:32.936 10.00000% : 8996.305us 01:29:32.936 25.00000% : 9353.775us 01:29:32.936 50.00000% : 10068.713us 01:29:32.936 75.00000% : 10962.385us 01:29:32.936 90.00000% : 11677.324us 01:29:32.936 95.00000% : 12451.840us 01:29:32.936 98.00000% : 13285.935us 01:29:32.936 99.00000% : 17992.611us 01:29:32.936 99.50000% : 23235.491us 01:29:32.936 99.90000% : 25737.775us 01:29:32.936 99.99000% : 26095.244us 01:29:32.936 99.99900% : 26214.400us 01:29:32.936 99.99990% : 26214.400us 01:29:32.936 99.99999% : 26214.400us 01:29:32.936 01:29:32.936 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 01:29:32.936 ============================================================================== 01:29:32.936 Range in us Cumulative IO count 01:29:32.936 7983.476 - 8043.055: 0.0163% ( 2) 01:29:32.936 8043.055 - 8102.633: 0.0732% ( 7) 01:29:32.936 8102.633 - 8162.211: 0.0895% ( 2) 01:29:32.936 8162.211 - 8221.789: 0.1139% ( 3) 01:29:32.936 8221.789 - 8281.367: 0.1383% ( 3) 01:29:32.936 8281.367 - 8340.945: 0.2116% ( 9) 01:29:32.936 8340.945 - 8400.524: 0.3906% ( 22) 01:29:32.936 8400.524 - 8460.102: 0.7812% ( 48) 01:29:32.936 8460.102 - 8519.680: 1.0417% ( 32) 01:29:32.936 8519.680 - 8579.258: 1.4486% ( 50) 01:29:32.936 8579.258 - 8638.836: 2.1159% ( 82) 01:29:32.936 8638.836 - 8698.415: 3.4261% ( 161) 01:29:32.936 8698.415 - 8757.993: 4.5980% ( 144) 01:29:32.936 8757.993 - 8817.571: 6.2500% ( 203) 01:29:32.936 8817.571 - 8877.149: 8.2926% ( 251) 01:29:32.936 8877.149 - 8936.727: 10.3190% ( 249) 01:29:32.936 8936.727 - 8996.305: 12.6383% ( 285) 01:29:32.936 8996.305 - 9055.884: 14.6810% ( 251) 01:29:32.936 9055.884 - 9115.462: 16.8783% ( 270) 01:29:32.936 9115.462 - 9175.040: 19.4092% ( 311) 01:29:32.936 9175.040 - 9234.618: 21.8424% ( 299) 01:29:32.936 9234.618 - 9294.196: 23.8525% ( 247) 01:29:32.936 9294.196 - 9353.775: 26.0010% ( 264) 01:29:32.936 9353.775 - 9413.353: 28.3040% ( 283) 01:29:32.936 9413.353 - 9472.931: 30.2816% ( 243) 01:29:32.936 9472.931 - 9532.509: 32.1370% ( 228) 01:29:32.936 9532.509 - 9592.087: 34.2041% ( 254) 01:29:32.936 9592.087 - 9651.665: 36.1654% ( 241) 01:29:32.936 9651.665 - 9711.244: 38.2731% ( 259) 01:29:32.936 9711.244 - 9770.822: 40.8854% ( 321) 01:29:32.936 9770.822 - 9830.400: 43.4896% ( 320) 01:29:32.936 9830.400 - 9889.978: 45.4427% ( 240) 01:29:32.936 9889.978 - 
9949.556: 47.3551% ( 235) 01:29:32.936 9949.556 - 10009.135: 49.0234% ( 205) 01:29:32.936 10009.135 - 10068.713: 50.6673% ( 202) 01:29:32.936 10068.713 - 10128.291: 52.1566% ( 183) 01:29:32.936 10128.291 - 10187.869: 53.8167% ( 204) 01:29:32.936 10187.869 - 10247.447: 55.2979% ( 182) 01:29:32.936 10247.447 - 10307.025: 57.0150% ( 211) 01:29:32.936 10307.025 - 10366.604: 58.8949% ( 231) 01:29:32.936 10366.604 - 10426.182: 61.2956% ( 295) 01:29:32.936 10426.182 - 10485.760: 63.0208% ( 212) 01:29:32.936 10485.760 - 10545.338: 64.6240% ( 197) 01:29:32.936 10545.338 - 10604.916: 66.3411% ( 211) 01:29:32.936 10604.916 - 10664.495: 68.0094% ( 205) 01:29:32.936 10664.495 - 10724.073: 69.5882% ( 194) 01:29:32.936 10724.073 - 10783.651: 71.1263% ( 189) 01:29:32.936 10783.651 - 10843.229: 72.8109% ( 207) 01:29:32.936 10843.229 - 10902.807: 74.5280% ( 211) 01:29:32.936 10902.807 - 10962.385: 76.1882% ( 204) 01:29:32.936 10962.385 - 11021.964: 77.5798% ( 171) 01:29:32.936 11021.964 - 11081.542: 79.2074% ( 200) 01:29:32.936 11081.542 - 11141.120: 80.4281% ( 150) 01:29:32.936 11141.120 - 11200.698: 81.7057% ( 157) 01:29:32.936 11200.698 - 11260.276: 83.1136% ( 173) 01:29:32.936 11260.276 - 11319.855: 84.1878% ( 132) 01:29:32.936 11319.855 - 11379.433: 85.3109% ( 138) 01:29:32.936 11379.433 - 11439.011: 86.0921% ( 96) 01:29:32.936 11439.011 - 11498.589: 86.9954% ( 111) 01:29:32.936 11498.589 - 11558.167: 87.8337% ( 103) 01:29:32.936 11558.167 - 11617.745: 88.6149% ( 96) 01:29:32.936 11617.745 - 11677.324: 89.4368% ( 101) 01:29:32.936 11677.324 - 11736.902: 90.1449% ( 87) 01:29:32.936 11736.902 - 11796.480: 90.6738% ( 65) 01:29:32.936 11796.480 - 11856.058: 91.2923% ( 76) 01:29:32.936 11856.058 - 11915.636: 91.9596% ( 82) 01:29:32.936 11915.636 - 11975.215: 92.6432% ( 84) 01:29:32.936 11975.215 - 12034.793: 93.1966% ( 68) 01:29:32.936 12034.793 - 12094.371: 93.7419% ( 67) 01:29:32.936 12094.371 - 12153.949: 94.2220% ( 59) 01:29:32.936 12153.949 - 12213.527: 94.6370% ( 51) 01:29:32.936 12213.527 - 12273.105: 94.9870% ( 43) 01:29:32.936 12273.105 - 12332.684: 95.3532% ( 45) 01:29:32.936 12332.684 - 12392.262: 95.5485% ( 24) 01:29:32.936 12392.262 - 12451.840: 95.7357% ( 23) 01:29:32.936 12451.840 - 12511.418: 95.8984% ( 20) 01:29:32.936 12511.418 - 12570.996: 96.1263% ( 28) 01:29:32.936 12570.996 - 12630.575: 96.3216% ( 24) 01:29:32.936 12630.575 - 12690.153: 96.5251% ( 25) 01:29:32.936 12690.153 - 12749.731: 96.7285% ( 25) 01:29:32.936 12749.731 - 12809.309: 96.8750% ( 18) 01:29:32.936 12809.309 - 12868.887: 97.0215% ( 18) 01:29:32.936 12868.887 - 12928.465: 97.2005% ( 22) 01:29:32.936 12928.465 - 12988.044: 97.4040% ( 25) 01:29:32.936 12988.044 - 13047.622: 97.6237% ( 27) 01:29:32.936 13047.622 - 13107.200: 97.8027% ( 22) 01:29:32.936 13107.200 - 13166.778: 98.0794% ( 34) 01:29:32.936 13166.778 - 13226.356: 98.3154% ( 29) 01:29:32.936 13226.356 - 13285.935: 98.4375% ( 15) 01:29:32.936 13285.935 - 13345.513: 98.4863% ( 6) 01:29:32.936 13345.513 - 13405.091: 98.5026% ( 2) 01:29:32.936 13405.091 - 13464.669: 98.5352% ( 4) 01:29:32.936 13464.669 - 13524.247: 98.5840% ( 6) 01:29:32.936 13524.247 - 13583.825: 98.6165% ( 4) 01:29:32.936 13583.825 - 13643.404: 98.6410% ( 3) 01:29:32.936 13643.404 - 13702.982: 98.6898% ( 6) 01:29:32.936 13702.982 - 13762.560: 98.7142% ( 3) 01:29:32.936 13762.560 - 13822.138: 98.7549% ( 5) 01:29:32.936 13822.138 - 13881.716: 98.7874% ( 4) 01:29:32.936 13881.716 - 13941.295: 98.8444% ( 7) 01:29:32.936 14060.451 - 14120.029: 98.8525% ( 1) 01:29:32.937 14120.029 - 14179.607: 98.8688% ( 
2) 01:29:32.937 14179.607 - 14239.185: 98.8770% ( 1) 01:29:32.937 14239.185 - 14298.764: 98.8932% ( 2) 01:29:32.937 14298.764 - 14358.342: 98.9095% ( 2) 01:29:32.937 14358.342 - 14417.920: 98.9258% ( 2) 01:29:32.937 14417.920 - 14477.498: 98.9421% ( 2) 01:29:32.937 14477.498 - 14537.076: 98.9583% ( 2) 01:29:32.937 33363.782 - 33602.095: 99.0072% ( 6) 01:29:32.937 33602.095 - 33840.407: 99.0560% ( 6) 01:29:32.937 33840.407 - 34078.720: 99.1048% ( 6) 01:29:32.937 34078.720 - 34317.033: 99.1618% ( 7) 01:29:32.937 34317.033 - 34555.345: 99.2106% ( 6) 01:29:32.937 34555.345 - 34793.658: 99.2513% ( 5) 01:29:32.937 34793.658 - 35031.971: 99.3164% ( 8) 01:29:32.937 35031.971 - 35270.284: 99.3571% ( 5) 01:29:32.937 35270.284 - 35508.596: 99.3978% ( 5) 01:29:32.937 35508.596 - 35746.909: 99.4466% ( 6) 01:29:32.937 35746.909 - 35985.222: 99.4792% ( 4) 01:29:32.937 40751.476 - 40989.789: 99.5280% ( 6) 01:29:32.937 40989.789 - 41228.102: 99.5850% ( 7) 01:29:32.937 41228.102 - 41466.415: 99.6338% ( 6) 01:29:32.937 41466.415 - 41704.727: 99.6908% ( 7) 01:29:32.937 41704.727 - 41943.040: 99.7396% ( 6) 01:29:32.937 41943.040 - 42181.353: 99.8047% ( 8) 01:29:32.937 42181.353 - 42419.665: 99.8535% ( 6) 01:29:32.937 42419.665 - 42657.978: 99.9023% ( 6) 01:29:32.937 42657.978 - 42896.291: 99.9593% ( 7) 01:29:32.937 42896.291 - 43134.604: 100.0000% ( 5) 01:29:32.937 01:29:32.937 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 01:29:32.937 ============================================================================== 01:29:32.937 Range in us Cumulative IO count 01:29:32.937 8221.789 - 8281.367: 0.0081% ( 1) 01:29:32.937 8281.367 - 8340.945: 0.0163% ( 1) 01:29:32.937 8400.524 - 8460.102: 0.0570% ( 5) 01:29:32.937 8460.102 - 8519.680: 0.2848% ( 28) 01:29:32.937 8519.680 - 8579.258: 0.7812% ( 61) 01:29:32.937 8579.258 - 8638.836: 1.2207% ( 54) 01:29:32.937 8638.836 - 8698.415: 1.8880% ( 82) 01:29:32.937 8698.415 - 8757.993: 2.8971% ( 124) 01:29:32.937 8757.993 - 8817.571: 4.3376% ( 177) 01:29:32.937 8817.571 - 8877.149: 6.1523% ( 223) 01:29:32.937 8877.149 - 8936.727: 8.2438% ( 257) 01:29:32.937 8936.727 - 8996.305: 10.2702% ( 249) 01:29:32.937 8996.305 - 9055.884: 12.5895% ( 285) 01:29:32.937 9055.884 - 9115.462: 15.6169% ( 372) 01:29:32.937 9115.462 - 9175.040: 18.4001% ( 342) 01:29:32.937 9175.040 - 9234.618: 21.2728% ( 353) 01:29:32.937 9234.618 - 9294.196: 23.7386% ( 303) 01:29:32.937 9294.196 - 9353.775: 26.4079% ( 328) 01:29:32.937 9353.775 - 9413.353: 28.8574% ( 301) 01:29:32.937 9413.353 - 9472.931: 31.1035% ( 276) 01:29:32.937 9472.931 - 9532.509: 33.5612% ( 302) 01:29:32.937 9532.509 - 9592.087: 35.6038% ( 251) 01:29:32.937 9592.087 - 9651.665: 37.3454% ( 214) 01:29:32.937 9651.665 - 9711.244: 39.0381% ( 208) 01:29:32.937 9711.244 - 9770.822: 41.1133% ( 255) 01:29:32.937 9770.822 - 9830.400: 42.9606% ( 227) 01:29:32.937 9830.400 - 9889.978: 44.5719% ( 198) 01:29:32.937 9889.978 - 9949.556: 46.5658% ( 245) 01:29:32.937 9949.556 - 10009.135: 48.5921% ( 249) 01:29:32.937 10009.135 - 10068.713: 50.2279% ( 201) 01:29:32.937 10068.713 - 10128.291: 52.4089% ( 268) 01:29:32.937 10128.291 - 10187.869: 54.0527% ( 202) 01:29:32.937 10187.869 - 10247.447: 55.4769% ( 175) 01:29:32.937 10247.447 - 10307.025: 57.3893% ( 235) 01:29:32.937 10307.025 - 10366.604: 59.0739% ( 207) 01:29:32.937 10366.604 - 10426.182: 60.8480% ( 218) 01:29:32.937 10426.182 - 10485.760: 62.6546% ( 222) 01:29:32.937 10485.760 - 10545.338: 64.3229% ( 205) 01:29:32.937 10545.338 - 10604.916: 65.7715% ( 178) 01:29:32.937 10604.916 - 
10664.495: 67.2933% ( 187) 01:29:32.937 10664.495 - 10724.073: 68.9697% ( 206) 01:29:32.937 10724.073 - 10783.651: 70.4590% ( 183) 01:29:32.937 10783.651 - 10843.229: 72.3063% ( 227) 01:29:32.937 10843.229 - 10902.807: 74.1536% ( 227) 01:29:32.937 10902.807 - 10962.385: 75.9928% ( 226) 01:29:32.937 10962.385 - 11021.964: 78.0192% ( 249) 01:29:32.937 11021.964 - 11081.542: 79.8584% ( 226) 01:29:32.937 11081.542 - 11141.120: 81.7952% ( 238) 01:29:32.937 11141.120 - 11200.698: 83.2031% ( 173) 01:29:32.937 11200.698 - 11260.276: 84.4645% ( 155) 01:29:32.937 11260.276 - 11319.855: 85.5713% ( 136) 01:29:32.937 11319.855 - 11379.433: 86.5641% ( 122) 01:29:32.937 11379.433 - 11439.011: 87.4674% ( 111) 01:29:32.937 11439.011 - 11498.589: 88.2324% ( 94) 01:29:32.937 11498.589 - 11558.167: 88.9893% ( 93) 01:29:32.937 11558.167 - 11617.745: 89.6810% ( 85) 01:29:32.937 11617.745 - 11677.324: 90.5029% ( 101) 01:29:32.937 11677.324 - 11736.902: 90.9912% ( 60) 01:29:32.937 11736.902 - 11796.480: 91.4144% ( 52) 01:29:32.937 11796.480 - 11856.058: 91.8294% ( 51) 01:29:32.937 11856.058 - 11915.636: 92.2607% ( 53) 01:29:32.937 11915.636 - 11975.215: 92.8385% ( 71) 01:29:32.937 11975.215 - 12034.793: 93.1641% ( 40) 01:29:32.937 12034.793 - 12094.371: 93.4326% ( 33) 01:29:32.937 12094.371 - 12153.949: 93.7337% ( 37) 01:29:32.937 12153.949 - 12213.527: 94.2383% ( 62) 01:29:32.937 12213.527 - 12273.105: 94.7510% ( 63) 01:29:32.937 12273.105 - 12332.684: 95.1009% ( 43) 01:29:32.937 12332.684 - 12392.262: 95.3939% ( 36) 01:29:32.937 12392.262 - 12451.840: 95.6136% ( 27) 01:29:32.937 12451.840 - 12511.418: 95.8171% ( 25) 01:29:32.937 12511.418 - 12570.996: 96.0693% ( 31) 01:29:32.937 12570.996 - 12630.575: 96.3298% ( 32) 01:29:32.937 12630.575 - 12690.153: 96.9076% ( 71) 01:29:32.937 12690.153 - 12749.731: 97.1110% ( 25) 01:29:32.937 12749.731 - 12809.309: 97.5016% ( 48) 01:29:32.937 12809.309 - 12868.887: 97.6888% ( 23) 01:29:32.937 12868.887 - 12928.465: 97.7539% ( 8) 01:29:32.937 12928.465 - 12988.044: 97.7865% ( 4) 01:29:32.937 12988.044 - 13047.622: 97.8353% ( 6) 01:29:32.937 13047.622 - 13107.200: 97.8923% ( 7) 01:29:32.937 13107.200 - 13166.778: 97.9492% ( 7) 01:29:32.937 13166.778 - 13226.356: 97.9818% ( 4) 01:29:32.937 13226.356 - 13285.935: 98.0062% ( 3) 01:29:32.937 13285.935 - 13345.513: 98.0306% ( 3) 01:29:32.937 13345.513 - 13405.091: 98.0550% ( 3) 01:29:32.937 13405.091 - 13464.669: 98.1283% ( 9) 01:29:32.937 13464.669 - 13524.247: 98.2503% ( 15) 01:29:32.937 13524.247 - 13583.825: 98.3887% ( 17) 01:29:32.937 13583.825 - 13643.404: 98.4375% ( 6) 01:29:32.937 13643.404 - 13702.982: 98.4945% ( 7) 01:29:32.937 13702.982 - 13762.560: 98.5514% ( 7) 01:29:32.937 13762.560 - 13822.138: 98.6003% ( 6) 01:29:32.937 13822.138 - 13881.716: 98.6247% ( 3) 01:29:32.937 13881.716 - 13941.295: 98.6491% ( 3) 01:29:32.937 13941.295 - 14000.873: 98.6735% ( 3) 01:29:32.937 14000.873 - 14060.451: 98.6979% ( 3) 01:29:32.937 14060.451 - 14120.029: 98.7793% ( 10) 01:29:32.937 14120.029 - 14179.607: 98.8363% ( 7) 01:29:32.937 14179.607 - 14239.185: 98.8688% ( 4) 01:29:32.937 14239.185 - 14298.764: 98.9014% ( 4) 01:29:32.937 14298.764 - 14358.342: 98.9258% ( 3) 01:29:32.937 14358.342 - 14417.920: 98.9339% ( 1) 01:29:32.937 14417.920 - 14477.498: 98.9502% ( 2) 01:29:32.937 14477.498 - 14537.076: 98.9583% ( 1) 01:29:32.937 30980.655 - 31218.967: 98.9665% ( 1) 01:29:32.937 31218.967 - 31457.280: 99.0234% ( 7) 01:29:32.937 31457.280 - 31695.593: 99.0723% ( 6) 01:29:32.937 31695.593 - 31933.905: 99.1211% ( 6) 01:29:32.937 31933.905 
- 32172.218: 99.1699% ( 6) 01:29:32.937 32172.218 - 32410.531: 99.2269% ( 7) 01:29:32.937 32410.531 - 32648.844: 99.2839% ( 7) 01:29:32.937 32648.844 - 32887.156: 99.3408% ( 7) 01:29:32.937 32887.156 - 33125.469: 99.3978% ( 7) 01:29:32.937 33125.469 - 33363.782: 99.4466% ( 6) 01:29:32.937 33363.782 - 33602.095: 99.4792% ( 4) 01:29:32.937 38368.349 - 38606.662: 99.5117% ( 4) 01:29:32.937 38606.662 - 38844.975: 99.5850% ( 9) 01:29:32.937 38844.975 - 39083.287: 99.6338% ( 6) 01:29:32.937 39083.287 - 39321.600: 99.6908% ( 7) 01:29:32.937 39321.600 - 39559.913: 99.7559% ( 8) 01:29:32.937 39559.913 - 39798.225: 99.8128% ( 7) 01:29:32.937 39798.225 - 40036.538: 99.8779% ( 8) 01:29:32.937 40036.538 - 40274.851: 99.9430% ( 8) 01:29:32.937 40274.851 - 40513.164: 100.0000% ( 7) 01:29:32.937 01:29:32.937 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 01:29:32.937 ============================================================================== 01:29:32.937 Range in us Cumulative IO count 01:29:32.937 8281.367 - 8340.945: 0.0407% ( 5) 01:29:32.937 8340.945 - 8400.524: 0.1139% ( 9) 01:29:32.937 8400.524 - 8460.102: 0.2197% ( 13) 01:29:32.937 8460.102 - 8519.680: 0.4557% ( 29) 01:29:32.937 8519.680 - 8579.258: 0.6510% ( 24) 01:29:32.937 8579.258 - 8638.836: 1.0905% ( 54) 01:29:32.937 8638.836 - 8698.415: 1.7008% ( 75) 01:29:32.937 8698.415 - 8757.993: 2.8646% ( 143) 01:29:32.937 8757.993 - 8817.571: 4.1341% ( 156) 01:29:32.937 8817.571 - 8877.149: 5.9489% ( 223) 01:29:32.937 8877.149 - 8936.727: 7.6497% ( 209) 01:29:32.937 8936.727 - 8996.305: 9.8714% ( 273) 01:29:32.937 8996.305 - 9055.884: 12.7197% ( 350) 01:29:32.937 9055.884 - 9115.462: 15.1855% ( 303) 01:29:32.937 9115.462 - 9175.040: 18.2699% ( 379) 01:29:32.937 9175.040 - 9234.618: 21.4030% ( 385) 01:29:32.937 9234.618 - 9294.196: 24.3245% ( 359) 01:29:32.937 9294.196 - 9353.775: 27.0996% ( 341) 01:29:32.937 9353.775 - 9413.353: 29.7607% ( 327) 01:29:32.937 9413.353 - 9472.931: 31.6732% ( 235) 01:29:32.937 9472.931 - 9532.509: 33.5856% ( 235) 01:29:32.937 9532.509 - 9592.087: 35.2458% ( 204) 01:29:32.937 9592.087 - 9651.665: 37.0036% ( 216) 01:29:32.937 9651.665 - 9711.244: 39.1439% ( 263) 01:29:32.937 9711.244 - 9770.822: 40.6901% ( 190) 01:29:32.938 9770.822 - 9830.400: 42.9199% ( 274) 01:29:32.938 9830.400 - 9889.978: 44.7673% ( 227) 01:29:32.938 9889.978 - 9949.556: 46.6309% ( 229) 01:29:32.938 9949.556 - 10009.135: 48.3643% ( 213) 01:29:32.938 10009.135 - 10068.713: 50.5778% ( 272) 01:29:32.938 10068.713 - 10128.291: 52.6123% ( 250) 01:29:32.938 10128.291 - 10187.869: 55.0049% ( 294) 01:29:32.938 10187.869 - 10247.447: 56.5023% ( 184) 01:29:32.938 10247.447 - 10307.025: 57.6497% ( 141) 01:29:32.938 10307.025 - 10366.604: 59.1390% ( 183) 01:29:32.938 10366.604 - 10426.182: 60.5306% ( 171) 01:29:32.938 10426.182 - 10485.760: 61.9222% ( 171) 01:29:32.938 10485.760 - 10545.338: 63.9160% ( 245) 01:29:32.938 10545.338 - 10604.916: 65.3402% ( 175) 01:29:32.938 10604.916 - 10664.495: 66.8213% ( 182) 01:29:32.938 10664.495 - 10724.073: 68.8151% ( 245) 01:29:32.938 10724.073 - 10783.651: 70.4834% ( 205) 01:29:32.938 10783.651 - 10843.229: 72.0133% ( 188) 01:29:32.938 10843.229 - 10902.807: 73.6572% ( 202) 01:29:32.938 10902.807 - 10962.385: 76.0742% ( 297) 01:29:32.938 10962.385 - 11021.964: 78.2308% ( 265) 01:29:32.938 11021.964 - 11081.542: 79.9561% ( 212) 01:29:32.938 11081.542 - 11141.120: 81.7708% ( 223) 01:29:32.938 11141.120 - 11200.698: 83.0160% ( 153) 01:29:32.938 11200.698 - 11260.276: 84.1553% ( 140) 01:29:32.938 11260.276 - 
11319.855: 85.1644% ( 124) 01:29:32.938 11319.855 - 11379.433: 86.1003% ( 115) 01:29:32.938 11379.433 - 11439.011: 86.8896% ( 97) 01:29:32.938 11439.011 - 11498.589: 87.5814% ( 85) 01:29:32.938 11498.589 - 11558.167: 88.4277% ( 104) 01:29:32.938 11558.167 - 11617.745: 89.0625% ( 78) 01:29:32.938 11617.745 - 11677.324: 89.7949% ( 90) 01:29:32.938 11677.324 - 11736.902: 90.5192% ( 89) 01:29:32.938 11736.902 - 11796.480: 91.1214% ( 74) 01:29:32.938 11796.480 - 11856.058: 91.6260% ( 62) 01:29:32.938 11856.058 - 11915.636: 92.3503% ( 89) 01:29:32.938 11915.636 - 11975.215: 93.0094% ( 81) 01:29:32.938 11975.215 - 12034.793: 93.6117% ( 74) 01:29:32.938 12034.793 - 12094.371: 93.9535% ( 42) 01:29:32.938 12094.371 - 12153.949: 94.2627% ( 38) 01:29:32.938 12153.949 - 12213.527: 94.8649% ( 74) 01:29:32.938 12213.527 - 12273.105: 95.1497% ( 35) 01:29:32.938 12273.105 - 12332.684: 95.3776% ( 28) 01:29:32.938 12332.684 - 12392.262: 95.7194% ( 42) 01:29:32.938 12392.262 - 12451.840: 95.9391% ( 27) 01:29:32.938 12451.840 - 12511.418: 96.1914% ( 31) 01:29:32.938 12511.418 - 12570.996: 96.7448% ( 68) 01:29:32.938 12570.996 - 12630.575: 97.0378% ( 36) 01:29:32.938 12630.575 - 12690.153: 97.2900% ( 31) 01:29:32.938 12690.153 - 12749.731: 97.6725% ( 47) 01:29:32.938 12749.731 - 12809.309: 97.8109% ( 17) 01:29:32.938 12809.309 - 12868.887: 97.9248% ( 14) 01:29:32.938 12868.887 - 12928.465: 98.1283% ( 25) 01:29:32.938 12928.465 - 12988.044: 98.2259% ( 12) 01:29:32.938 12988.044 - 13047.622: 98.2747% ( 6) 01:29:32.938 13047.622 - 13107.200: 98.3236% ( 6) 01:29:32.938 13107.200 - 13166.778: 98.3561% ( 4) 01:29:32.938 13166.778 - 13226.356: 98.4049% ( 6) 01:29:32.938 13226.356 - 13285.935: 98.4538% ( 6) 01:29:32.938 13285.935 - 13345.513: 98.4782% ( 3) 01:29:32.938 13345.513 - 13405.091: 98.5107% ( 4) 01:29:32.938 13405.091 - 13464.669: 98.5352% ( 3) 01:29:32.938 13464.669 - 13524.247: 98.5514% ( 2) 01:29:32.938 13524.247 - 13583.825: 98.5840% ( 4) 01:29:32.938 13583.825 - 13643.404: 98.6003% ( 2) 01:29:32.938 13643.404 - 13702.982: 98.6247% ( 3) 01:29:32.938 13702.982 - 13762.560: 98.6491% ( 3) 01:29:32.938 13762.560 - 13822.138: 98.6735% ( 3) 01:29:32.938 13822.138 - 13881.716: 98.7223% ( 6) 01:29:32.938 13881.716 - 13941.295: 98.8037% ( 10) 01:29:32.938 13941.295 - 14000.873: 98.8363% ( 4) 01:29:32.938 14000.873 - 14060.451: 98.8688% ( 4) 01:29:32.938 14060.451 - 14120.029: 98.9014% ( 4) 01:29:32.938 14120.029 - 14179.607: 98.9258% ( 3) 01:29:32.938 14179.607 - 14239.185: 98.9421% ( 2) 01:29:32.938 14239.185 - 14298.764: 98.9502% ( 1) 01:29:32.938 14298.764 - 14358.342: 98.9583% ( 1) 01:29:32.938 29669.935 - 29789.091: 98.9665% ( 1) 01:29:32.938 29789.091 - 29908.247: 98.9990% ( 4) 01:29:32.938 29908.247 - 30027.404: 99.0234% ( 3) 01:29:32.938 30027.404 - 30146.560: 99.0479% ( 3) 01:29:32.938 30146.560 - 30265.716: 99.0804% ( 4) 01:29:32.938 30265.716 - 30384.873: 99.1048% ( 3) 01:29:32.938 30384.873 - 30504.029: 99.1292% ( 3) 01:29:32.938 30504.029 - 30742.342: 99.1781% ( 6) 01:29:32.938 30742.342 - 30980.655: 99.2350% ( 7) 01:29:32.938 30980.655 - 31218.967: 99.3001% ( 8) 01:29:32.938 31218.967 - 31457.280: 99.3490% ( 6) 01:29:32.938 31457.280 - 31695.593: 99.4141% ( 8) 01:29:32.938 31695.593 - 31933.905: 99.4710% ( 7) 01:29:32.938 31933.905 - 32172.218: 99.4792% ( 1) 01:29:32.938 36938.473 - 37176.785: 99.4954% ( 2) 01:29:32.938 37176.785 - 37415.098: 99.5524% ( 7) 01:29:32.938 37415.098 - 37653.411: 99.6094% ( 7) 01:29:32.938 37653.411 - 37891.724: 99.6663% ( 7) 01:29:32.938 37891.724 - 38130.036: 99.7314% ( 
8) 01:29:32.938 38130.036 - 38368.349: 99.7965% ( 8) 01:29:32.938 38368.349 - 38606.662: 99.8535% ( 7) 01:29:32.938 38606.662 - 38844.975: 99.9105% ( 7) 01:29:32.938 38844.975 - 39083.287: 99.9756% ( 8) 01:29:32.938 39083.287 - 39321.600: 100.0000% ( 3) 01:29:32.938 01:29:32.938 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 01:29:32.938 ============================================================================== 01:29:32.938 Range in us Cumulative IO count 01:29:32.938 8281.367 - 8340.945: 0.0081% ( 1) 01:29:32.938 8340.945 - 8400.524: 0.0163% ( 1) 01:29:32.938 8400.524 - 8460.102: 0.0326% ( 2) 01:29:32.938 8460.102 - 8519.680: 0.1302% ( 12) 01:29:32.938 8519.680 - 8579.258: 0.3662% ( 29) 01:29:32.938 8579.258 - 8638.836: 0.6673% ( 37) 01:29:32.938 8638.836 - 8698.415: 1.3672% ( 86) 01:29:32.938 8698.415 - 8757.993: 2.5635% ( 147) 01:29:32.938 8757.993 - 8817.571: 4.1585% ( 196) 01:29:32.938 8817.571 - 8877.149: 6.3721% ( 272) 01:29:32.938 8877.149 - 8936.727: 8.5205% ( 264) 01:29:32.938 8936.727 - 8996.305: 10.6934% ( 267) 01:29:32.938 8996.305 - 9055.884: 13.0859% ( 294) 01:29:32.938 9055.884 - 9115.462: 15.2018% ( 260) 01:29:32.938 9115.462 - 9175.040: 17.8467% ( 325) 01:29:32.938 9175.040 - 9234.618: 20.6462% ( 344) 01:29:32.938 9234.618 - 9294.196: 23.2910% ( 325) 01:29:32.938 9294.196 - 9353.775: 25.7406% ( 301) 01:29:32.938 9353.775 - 9413.353: 28.3040% ( 315) 01:29:32.938 9413.353 - 9472.931: 30.8350% ( 311) 01:29:32.938 9472.931 - 9532.509: 32.8939% ( 253) 01:29:32.938 9532.509 - 9592.087: 34.8470% ( 240) 01:29:32.938 9592.087 - 9651.665: 36.9548% ( 259) 01:29:32.938 9651.665 - 9711.244: 38.9811% ( 249) 01:29:32.938 9711.244 - 9770.822: 41.1947% ( 272) 01:29:32.938 9770.822 - 9830.400: 43.1641% ( 242) 01:29:32.938 9830.400 - 9889.978: 45.0846% ( 236) 01:29:32.938 9889.978 - 9949.556: 47.0459% ( 241) 01:29:32.938 9949.556 - 10009.135: 49.0885% ( 251) 01:29:32.938 10009.135 - 10068.713: 51.0742% ( 244) 01:29:32.938 10068.713 - 10128.291: 53.5238% ( 301) 01:29:32.938 10128.291 - 10187.869: 55.5908% ( 254) 01:29:32.938 10187.869 - 10247.447: 57.2835% ( 208) 01:29:32.938 10247.447 - 10307.025: 58.3903% ( 136) 01:29:32.938 10307.025 - 10366.604: 59.8714% ( 182) 01:29:32.938 10366.604 - 10426.182: 61.2386% ( 168) 01:29:32.938 10426.182 - 10485.760: 62.6790% ( 177) 01:29:32.938 10485.760 - 10545.338: 63.8590% ( 145) 01:29:32.938 10545.338 - 10604.916: 65.2832% ( 175) 01:29:32.938 10604.916 - 10664.495: 66.7399% ( 179) 01:29:32.938 10664.495 - 10724.073: 68.4163% ( 206) 01:29:32.938 10724.073 - 10783.651: 70.1660% ( 215) 01:29:32.938 10783.651 - 10843.229: 71.8180% ( 203) 01:29:32.938 10843.229 - 10902.807: 73.4863% ( 205) 01:29:32.938 10902.807 - 10962.385: 75.4232% ( 238) 01:29:32.938 10962.385 - 11021.964: 77.4821% ( 253) 01:29:32.938 11021.964 - 11081.542: 79.2155% ( 213) 01:29:32.938 11081.542 - 11141.120: 80.6396% ( 175) 01:29:32.938 11141.120 - 11200.698: 82.3812% ( 214) 01:29:32.938 11200.698 - 11260.276: 83.7321% ( 166) 01:29:32.938 11260.276 - 11319.855: 84.7738% ( 128) 01:29:32.938 11319.855 - 11379.433: 85.9049% ( 139) 01:29:32.938 11379.433 - 11439.011: 86.9629% ( 130) 01:29:32.938 11439.011 - 11498.589: 87.8092% ( 104) 01:29:32.938 11498.589 - 11558.167: 88.4440% ( 78) 01:29:32.938 11558.167 - 11617.745: 89.1927% ( 92) 01:29:32.938 11617.745 - 11677.324: 89.8844% ( 85) 01:29:32.938 11677.324 - 11736.902: 90.4704% ( 72) 01:29:32.938 11736.902 - 11796.480: 91.2842% ( 100) 01:29:32.938 11796.480 - 11856.058: 91.9434% ( 81) 01:29:32.938 11856.058 - 11915.636: 
92.4154% ( 58) 01:29:32.938 11915.636 - 11975.215: 92.8630% ( 55) 01:29:32.938 11975.215 - 12034.793: 93.4163% ( 68) 01:29:32.938 12034.793 - 12094.371: 93.7907% ( 46) 01:29:32.938 12094.371 - 12153.949: 94.1813% ( 48) 01:29:32.938 12153.949 - 12213.527: 94.8893% ( 87) 01:29:32.938 12213.527 - 12273.105: 95.0846% ( 24) 01:29:32.938 12273.105 - 12332.684: 95.3369% ( 31) 01:29:32.938 12332.684 - 12392.262: 95.5811% ( 30) 01:29:32.938 12392.262 - 12451.840: 95.8740% ( 36) 01:29:32.938 12451.840 - 12511.418: 96.0856% ( 26) 01:29:32.938 12511.418 - 12570.996: 96.4355% ( 43) 01:29:32.938 12570.996 - 12630.575: 96.8831% ( 55) 01:29:32.938 12630.575 - 12690.153: 97.0947% ( 26) 01:29:32.938 12690.153 - 12749.731: 97.5667% ( 58) 01:29:32.938 12749.731 - 12809.309: 97.8760% ( 38) 01:29:32.938 12809.309 - 12868.887: 97.9980% ( 15) 01:29:32.938 12868.887 - 12928.465: 98.1038% ( 13) 01:29:32.938 12928.465 - 12988.044: 98.2096% ( 13) 01:29:32.939 12988.044 - 13047.622: 98.3073% ( 12) 01:29:32.939 13047.622 - 13107.200: 98.3805% ( 9) 01:29:32.939 13107.200 - 13166.778: 98.4212% ( 5) 01:29:32.939 13166.778 - 13226.356: 98.4375% ( 2) 01:29:32.939 13524.247 - 13583.825: 98.4538% ( 2) 01:29:32.939 13583.825 - 13643.404: 98.4701% ( 2) 01:29:32.939 13643.404 - 13702.982: 98.5026% ( 4) 01:29:32.939 13702.982 - 13762.560: 98.5189% ( 2) 01:29:32.939 13762.560 - 13822.138: 98.5433% ( 3) 01:29:32.939 13822.138 - 13881.716: 98.5514% ( 1) 01:29:32.939 13881.716 - 13941.295: 98.5840% ( 4) 01:29:32.939 13941.295 - 14000.873: 98.6003% ( 2) 01:29:32.939 14000.873 - 14060.451: 98.6328% ( 4) 01:29:32.939 14060.451 - 14120.029: 98.6410% ( 1) 01:29:32.939 14120.029 - 14179.607: 98.6654% ( 3) 01:29:32.939 14179.607 - 14239.185: 98.6898% ( 3) 01:29:32.939 14239.185 - 14298.764: 98.7467% ( 7) 01:29:32.939 14298.764 - 14358.342: 98.8037% ( 7) 01:29:32.939 14358.342 - 14417.920: 98.8932% ( 11) 01:29:32.939 14417.920 - 14477.498: 98.9176% ( 3) 01:29:32.939 14477.498 - 14537.076: 98.9258% ( 1) 01:29:32.939 14537.076 - 14596.655: 98.9421% ( 2) 01:29:32.939 14596.655 - 14656.233: 98.9583% ( 2) 01:29:32.939 27286.807 - 27405.964: 98.9665% ( 1) 01:29:32.939 27405.964 - 27525.120: 98.9909% ( 3) 01:29:32.939 27525.120 - 27644.276: 99.0234% ( 4) 01:29:32.939 27644.276 - 27763.433: 99.0479% ( 3) 01:29:32.939 27763.433 - 27882.589: 99.0804% ( 4) 01:29:32.939 27882.589 - 28001.745: 99.1048% ( 3) 01:29:32.939 28001.745 - 28120.902: 99.1374% ( 4) 01:29:32.939 28120.902 - 28240.058: 99.1699% ( 4) 01:29:32.939 28240.058 - 28359.215: 99.1943% ( 3) 01:29:32.939 28359.215 - 28478.371: 99.2188% ( 3) 01:29:32.939 28478.371 - 28597.527: 99.2432% ( 3) 01:29:32.939 28597.527 - 28716.684: 99.2757% ( 4) 01:29:32.939 28716.684 - 28835.840: 99.3001% ( 3) 01:29:32.939 28835.840 - 28954.996: 99.3245% ( 3) 01:29:32.939 28954.996 - 29074.153: 99.3571% ( 4) 01:29:32.939 29074.153 - 29193.309: 99.3815% ( 3) 01:29:32.939 29193.309 - 29312.465: 99.4141% ( 4) 01:29:32.939 29312.465 - 29431.622: 99.4385% ( 3) 01:29:32.939 29431.622 - 29550.778: 99.4710% ( 4) 01:29:32.939 29550.778 - 29669.935: 99.4792% ( 1) 01:29:32.939 34555.345 - 34793.658: 99.4873% ( 1) 01:29:32.939 34793.658 - 35031.971: 99.5524% ( 8) 01:29:32.939 35031.971 - 35270.284: 99.6094% ( 7) 01:29:32.939 35270.284 - 35508.596: 99.6663% ( 7) 01:29:32.939 35508.596 - 35746.909: 99.7314% ( 8) 01:29:32.939 35746.909 - 35985.222: 99.7884% ( 7) 01:29:32.939 35985.222 - 36223.535: 99.8372% ( 6) 01:29:32.939 36223.535 - 36461.847: 99.8942% ( 7) 01:29:32.939 36461.847 - 36700.160: 99.9430% ( 6) 01:29:32.939 
36700.160 - 36938.473: 100.0000% ( 7) 01:29:32.939 01:29:32.939 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 01:29:32.939 ============================================================================== 01:29:32.939 Range in us Cumulative IO count 01:29:32.939 8162.211 - 8221.789: 0.0081% ( 1) 01:29:32.939 8400.524 - 8460.102: 0.0648% ( 7) 01:29:32.939 8460.102 - 8519.680: 0.2186% ( 19) 01:29:32.939 8519.680 - 8579.258: 0.5019% ( 35) 01:29:32.939 8579.258 - 8638.836: 0.9958% ( 61) 01:29:32.939 8638.836 - 8698.415: 1.7892% ( 98) 01:29:32.939 8698.415 - 8757.993: 3.1979% ( 174) 01:29:32.939 8757.993 - 8817.571: 4.5499% ( 167) 01:29:32.939 8817.571 - 8877.149: 6.2095% ( 205) 01:29:32.939 8877.149 - 8936.727: 8.1525% ( 240) 01:29:32.939 8936.727 - 8996.305: 10.6056% ( 303) 01:29:32.939 8996.305 - 9055.884: 12.5243% ( 237) 01:29:32.939 9055.884 - 9115.462: 14.6292% ( 260) 01:29:32.939 9115.462 - 9175.040: 17.1551% ( 312) 01:29:32.939 9175.040 - 9234.618: 20.2477% ( 382) 01:29:32.939 9234.618 - 9294.196: 22.7817% ( 313) 01:29:32.939 9294.196 - 9353.775: 25.5991% ( 348) 01:29:32.939 9353.775 - 9413.353: 28.2707% ( 330) 01:29:32.939 9413.353 - 9472.931: 30.7966% ( 312) 01:29:32.939 9472.931 - 9532.509: 33.0311% ( 276) 01:29:32.939 9532.509 - 9592.087: 35.1036% ( 256) 01:29:32.939 9592.087 - 9651.665: 36.9981% ( 234) 01:29:32.939 9651.665 - 9711.244: 38.8763% ( 232) 01:29:32.939 9711.244 - 9770.822: 40.9084% ( 251) 01:29:32.939 9770.822 - 9830.400: 42.8109% ( 235) 01:29:32.939 9830.400 - 9889.978: 44.7134% ( 235) 01:29:32.939 9889.978 - 9949.556: 46.5269% ( 224) 01:29:32.939 9949.556 - 10009.135: 48.3080% ( 220) 01:29:32.939 10009.135 - 10068.713: 50.2186% ( 236) 01:29:32.939 10068.713 - 10128.291: 52.4288% ( 273) 01:29:32.939 10128.291 - 10187.869: 54.4041% ( 244) 01:29:32.939 10187.869 - 10247.447: 55.9747% ( 194) 01:29:32.939 10247.447 - 10307.025: 57.4563% ( 183) 01:29:32.939 10307.025 - 10366.604: 59.0269% ( 194) 01:29:32.939 10366.604 - 10426.182: 60.6541% ( 201) 01:29:32.939 10426.182 - 10485.760: 62.1195% ( 181) 01:29:32.939 10485.760 - 10545.338: 63.7468% ( 201) 01:29:32.939 10545.338 - 10604.916: 65.7950% ( 253) 01:29:32.939 10604.916 - 10664.495: 67.2037% ( 174) 01:29:32.939 10664.495 - 10724.073: 68.8310% ( 201) 01:29:32.939 10724.073 - 10783.651: 70.6930% ( 230) 01:29:32.939 10783.651 - 10843.229: 72.0126% ( 163) 01:29:32.939 10843.229 - 10902.807: 73.8747% ( 230) 01:29:32.939 10902.807 - 10962.385: 75.6477% ( 219) 01:29:32.939 10962.385 - 11021.964: 77.3964% ( 216) 01:29:32.939 11021.964 - 11081.542: 79.4041% ( 248) 01:29:32.939 11081.542 - 11141.120: 80.8695% ( 181) 01:29:32.939 11141.120 - 11200.698: 82.2296% ( 168) 01:29:32.939 11200.698 - 11260.276: 83.5087% ( 158) 01:29:32.939 11260.276 - 11319.855: 84.4479% ( 116) 01:29:32.939 11319.855 - 11379.433: 85.3789% ( 115) 01:29:32.939 11379.433 - 11439.011: 86.2775% ( 111) 01:29:32.939 11439.011 - 11498.589: 87.0709% ( 98) 01:29:32.939 11498.589 - 11558.167: 87.9696% ( 111) 01:29:32.939 11558.167 - 11617.745: 88.9330% ( 119) 01:29:32.939 11617.745 - 11677.324: 90.2283% ( 160) 01:29:32.939 11677.324 - 11736.902: 90.9893% ( 94) 01:29:32.939 11736.902 - 11796.480: 91.7989% ( 100) 01:29:32.939 11796.480 - 11856.058: 92.2037% ( 50) 01:29:32.939 11856.058 - 11915.636: 92.5275% ( 40) 01:29:32.939 11915.636 - 11975.215: 92.8433% ( 39) 01:29:32.939 11975.215 - 12034.793: 93.2885% ( 55) 01:29:32.939 12034.793 - 12094.371: 93.8067% ( 64) 01:29:32.939 12094.371 - 12153.949: 94.0981% ( 36) 01:29:32.939 12153.949 - 12213.527: 94.5515% 
( 56) 01:29:32.939 12213.527 - 12273.105: 94.9401% ( 48) 01:29:32.939 12273.105 - 12332.684: 95.1020% ( 20) 01:29:32.939 12332.684 - 12392.262: 95.2720% ( 21) 01:29:32.939 12392.262 - 12451.840: 95.4906% ( 27) 01:29:32.939 12451.840 - 12511.418: 95.7497% ( 32) 01:29:32.939 12511.418 - 12570.996: 96.2678% ( 64) 01:29:32.939 12570.996 - 12630.575: 96.7293% ( 57) 01:29:32.939 12630.575 - 12690.153: 97.0369% ( 38) 01:29:32.939 12690.153 - 12749.731: 97.2312% ( 24) 01:29:32.939 12749.731 - 12809.309: 97.4903% ( 32) 01:29:32.939 12809.309 - 12868.887: 97.6441% ( 19) 01:29:32.939 12868.887 - 12928.465: 97.8222% ( 22) 01:29:32.939 12928.465 - 12988.044: 97.9922% ( 21) 01:29:32.939 12988.044 - 13047.622: 98.1137% ( 15) 01:29:32.939 13047.622 - 13107.200: 98.2351% ( 15) 01:29:32.939 13107.200 - 13166.778: 98.3161% ( 10) 01:29:32.939 13166.778 - 13226.356: 98.3646% ( 6) 01:29:32.939 13226.356 - 13285.935: 98.3889% ( 3) 01:29:32.939 13285.935 - 13345.513: 98.4132% ( 3) 01:29:32.939 13345.513 - 13405.091: 98.4456% ( 4) 01:29:32.939 13702.982 - 13762.560: 98.4537% ( 1) 01:29:32.939 13881.716 - 13941.295: 98.4618% ( 1) 01:29:32.939 14060.451 - 14120.029: 98.4780% ( 2) 01:29:32.939 14120.029 - 14179.607: 98.5023% ( 3) 01:29:32.939 14179.607 - 14239.185: 98.5266% ( 3) 01:29:32.939 14239.185 - 14298.764: 98.5427% ( 2) 01:29:32.939 14298.764 - 14358.342: 98.5670% ( 3) 01:29:32.939 14358.342 - 14417.920: 98.5832% ( 2) 01:29:32.939 14417.920 - 14477.498: 98.6075% ( 3) 01:29:32.939 14477.498 - 14537.076: 98.6237% ( 2) 01:29:32.939 14537.076 - 14596.655: 98.6561% ( 4) 01:29:32.939 14596.655 - 14656.233: 98.6966% ( 5) 01:29:32.939 14656.233 - 14715.811: 98.7532% ( 7) 01:29:32.939 14715.811 - 14775.389: 98.8018% ( 6) 01:29:32.939 14775.389 - 14834.967: 98.8342% ( 4) 01:29:32.939 14834.967 - 14894.545: 98.8666% ( 4) 01:29:32.939 14894.545 - 14954.124: 98.8909% ( 3) 01:29:32.939 14954.124 - 15013.702: 98.9233% ( 4) 01:29:32.939 15013.702 - 15073.280: 98.9556% ( 4) 01:29:32.939 15073.280 - 15132.858: 98.9637% ( 1) 01:29:32.939 19422.487 - 19541.644: 98.9880% ( 3) 01:29:32.939 19541.644 - 19660.800: 99.0204% ( 4) 01:29:32.939 19660.800 - 19779.956: 99.0528% ( 4) 01:29:32.939 19779.956 - 19899.113: 99.0771% ( 3) 01:29:32.939 19899.113 - 20018.269: 99.1014% ( 3) 01:29:32.939 20018.269 - 20137.425: 99.1337% ( 4) 01:29:32.939 20137.425 - 20256.582: 99.1661% ( 4) 01:29:32.940 20256.582 - 20375.738: 99.1985% ( 4) 01:29:32.940 20375.738 - 20494.895: 99.2228% ( 3) 01:29:32.940 20494.895 - 20614.051: 99.2552% ( 4) 01:29:32.940 20614.051 - 20733.207: 99.2795% ( 3) 01:29:32.940 20733.207 - 20852.364: 99.3119% ( 4) 01:29:32.940 20852.364 - 20971.520: 99.3442% ( 4) 01:29:32.940 20971.520 - 21090.676: 99.3766% ( 4) 01:29:32.940 21090.676 - 21209.833: 99.4009% ( 3) 01:29:32.940 21209.833 - 21328.989: 99.4333% ( 4) 01:29:32.940 21328.989 - 21448.145: 99.4657% ( 4) 01:29:32.940 21448.145 - 21567.302: 99.4819% ( 2) 01:29:32.940 26452.713 - 26571.869: 99.5062% ( 3) 01:29:32.940 26571.869 - 26691.025: 99.5304% ( 3) 01:29:32.940 26691.025 - 26810.182: 99.5547% ( 3) 01:29:32.940 26810.182 - 26929.338: 99.5709% ( 2) 01:29:32.940 26929.338 - 27048.495: 99.6033% ( 4) 01:29:32.940 27048.495 - 27167.651: 99.6114% ( 1) 01:29:32.940 27167.651 - 27286.807: 99.6438% ( 4) 01:29:32.940 27286.807 - 27405.964: 99.6762% ( 4) 01:29:32.940 27405.964 - 27525.120: 99.7005% ( 3) 01:29:32.940 27525.120 - 27644.276: 99.7328% ( 4) 01:29:32.940 27644.276 - 27763.433: 99.7571% ( 3) 01:29:32.940 27763.433 - 27882.589: 99.7895% ( 4) 01:29:32.940 27882.589 - 
28001.745: 99.8138% ( 3) 01:29:32.940 28001.745 - 28120.902: 99.8462% ( 4) 01:29:32.940 28120.902 - 28240.058: 99.8705% ( 3) 01:29:32.940 28240.058 - 28359.215: 99.8948% ( 3) 01:29:32.940 28359.215 - 28478.371: 99.9271% ( 4) 01:29:32.940 28478.371 - 28597.527: 99.9595% ( 4) 01:29:32.940 28597.527 - 28716.684: 99.9919% ( 4) 01:29:32.940 28716.684 - 28835.840: 100.0000% ( 1) 01:29:32.940 01:29:32.940 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 01:29:32.940 ============================================================================== 01:29:32.940 Range in us Cumulative IO count 01:29:32.940 8340.945 - 8400.524: 0.0162% ( 2) 01:29:32.940 8400.524 - 8460.102: 0.0810% ( 8) 01:29:32.940 8460.102 - 8519.680: 0.2510% ( 21) 01:29:32.940 8519.680 - 8579.258: 0.6881% ( 54) 01:29:32.940 8579.258 - 8638.836: 1.1820% ( 61) 01:29:32.940 8638.836 - 8698.415: 1.8459% ( 82) 01:29:32.940 8698.415 - 8757.993: 2.8174% ( 120) 01:29:32.940 8757.993 - 8817.571: 3.8779% ( 131) 01:29:32.940 8817.571 - 8877.149: 5.5295% ( 204) 01:29:32.940 8877.149 - 8936.727: 7.8125% ( 282) 01:29:32.940 8936.727 - 8996.305: 10.1522% ( 289) 01:29:32.940 8996.305 - 9055.884: 12.7105% ( 316) 01:29:32.940 9055.884 - 9115.462: 15.3012% ( 320) 01:29:32.940 9115.462 - 9175.040: 18.4828% ( 393) 01:29:32.940 9175.040 - 9234.618: 20.9035% ( 299) 01:29:32.940 9234.618 - 9294.196: 23.6723% ( 342) 01:29:32.940 9294.196 - 9353.775: 26.2063% ( 313) 01:29:32.940 9353.775 - 9413.353: 28.6836% ( 306) 01:29:32.940 9413.353 - 9472.931: 31.0233% ( 289) 01:29:32.940 9472.931 - 9532.509: 33.1444% ( 262) 01:29:32.940 9532.509 - 9592.087: 34.7231% ( 195) 01:29:32.940 9592.087 - 9651.665: 36.0670% ( 166) 01:29:32.940 9651.665 - 9711.244: 37.6943% ( 201) 01:29:32.940 9711.244 - 9770.822: 39.2811% ( 196) 01:29:32.940 9770.822 - 9830.400: 41.2160% ( 239) 01:29:32.940 9830.400 - 9889.978: 43.1347% ( 237) 01:29:32.940 9889.978 - 9949.556: 45.2477% ( 261) 01:29:32.940 9949.556 - 10009.135: 47.8951% ( 327) 01:29:32.940 10009.135 - 10068.713: 50.4858% ( 320) 01:29:32.940 10068.713 - 10128.291: 53.0845% ( 321) 01:29:32.940 10128.291 - 10187.869: 54.9142% ( 226) 01:29:32.940 10187.869 - 10247.447: 56.6872% ( 219) 01:29:32.940 10247.447 - 10307.025: 58.0311% ( 166) 01:29:32.940 10307.025 - 10366.604: 59.5612% ( 189) 01:29:32.940 10366.604 - 10426.182: 61.4556% ( 234) 01:29:32.940 10426.182 - 10485.760: 63.2043% ( 216) 01:29:32.940 10485.760 - 10545.338: 64.4592% ( 155) 01:29:32.940 10545.338 - 10604.916: 65.8679% ( 174) 01:29:32.940 10604.916 - 10664.495: 67.1632% ( 160) 01:29:32.940 10664.495 - 10724.073: 68.5962% ( 177) 01:29:32.940 10724.073 - 10783.651: 70.2558% ( 205) 01:29:32.940 10783.651 - 10843.229: 71.6321% ( 170) 01:29:32.940 10843.229 - 10902.807: 73.5185% ( 233) 01:29:32.940 10902.807 - 10962.385: 75.3238% ( 223) 01:29:32.940 10962.385 - 11021.964: 76.6920% ( 169) 01:29:32.940 11021.964 - 11081.542: 78.3274% ( 202) 01:29:32.940 11081.542 - 11141.120: 80.4647% ( 264) 01:29:32.940 11141.120 - 11200.698: 81.8167% ( 167) 01:29:32.940 11200.698 - 11260.276: 83.2578% ( 178) 01:29:32.940 11260.276 - 11319.855: 84.4236% ( 144) 01:29:32.940 11319.855 - 11379.433: 85.5489% ( 139) 01:29:32.940 11379.433 - 11439.011: 86.5609% ( 125) 01:29:32.940 11439.011 - 11498.589: 87.4514% ( 110) 01:29:32.940 11498.589 - 11558.167: 88.5444% ( 135) 01:29:32.940 11558.167 - 11617.745: 89.6859% ( 141) 01:29:32.940 11617.745 - 11677.324: 90.5036% ( 101) 01:29:32.940 11677.324 - 11736.902: 91.0865% ( 72) 01:29:32.940 11736.902 - 11796.480: 91.8070% ( 89) 01:29:32.940 
11796.480 - 11856.058: 92.5032% ( 86) 01:29:32.940 11856.058 - 11915.636: 92.9566% ( 56) 01:29:32.940 11915.636 - 11975.215: 93.2723% ( 39) 01:29:32.940 11975.215 - 12034.793: 93.6124% ( 42) 01:29:32.940 12034.793 - 12094.371: 93.7905% ( 22) 01:29:32.940 12094.371 - 12153.949: 93.9848% ( 24) 01:29:32.940 12153.949 - 12213.527: 94.1953% ( 26) 01:29:32.940 12213.527 - 12273.105: 94.5677% ( 46) 01:29:32.940 12273.105 - 12332.684: 94.7377% ( 21) 01:29:32.940 12332.684 - 12392.262: 94.9644% ( 28) 01:29:32.940 12392.262 - 12451.840: 95.5068% ( 67) 01:29:32.940 12451.840 - 12511.418: 95.8387% ( 41) 01:29:32.940 12511.418 - 12570.996: 96.1545% ( 39) 01:29:32.940 12570.996 - 12630.575: 96.4945% ( 42) 01:29:32.940 12630.575 - 12690.153: 96.9883% ( 61) 01:29:32.940 12690.153 - 12749.731: 97.1422% ( 19) 01:29:32.940 12749.731 - 12809.309: 97.2879% ( 18) 01:29:32.940 12809.309 - 12868.887: 97.4012% ( 14) 01:29:32.940 12868.887 - 12928.465: 97.5389% ( 17) 01:29:32.940 12928.465 - 12988.044: 97.6360% ( 12) 01:29:32.940 12988.044 - 13047.622: 97.7413% ( 13) 01:29:32.940 13047.622 - 13107.200: 97.7898% ( 6) 01:29:32.940 13107.200 - 13166.778: 97.8546% ( 8) 01:29:32.940 13166.778 - 13226.356: 97.9679% ( 14) 01:29:32.940 13226.356 - 13285.935: 98.0489% ( 10) 01:29:32.940 13285.935 - 13345.513: 98.1380% ( 11) 01:29:32.940 13345.513 - 13405.091: 98.2351% ( 12) 01:29:32.940 13405.091 - 13464.669: 98.3242% ( 11) 01:29:32.940 13464.669 - 13524.247: 98.3646% ( 5) 01:29:32.940 13524.247 - 13583.825: 98.4051% ( 5) 01:29:32.940 13583.825 - 13643.404: 98.4375% ( 4) 01:29:32.940 13643.404 - 13702.982: 98.4456% ( 1) 01:29:32.940 14358.342 - 14417.920: 98.4618% ( 2) 01:29:32.940 14417.920 - 14477.498: 98.4699% ( 1) 01:29:32.940 14477.498 - 14537.076: 98.5023% ( 4) 01:29:32.940 14537.076 - 14596.655: 98.5185% ( 2) 01:29:32.940 14596.655 - 14656.233: 98.5347% ( 2) 01:29:32.940 14656.233 - 14715.811: 98.5589% ( 3) 01:29:32.940 14715.811 - 14775.389: 98.5994% ( 5) 01:29:32.940 14775.389 - 14834.967: 98.6075% ( 1) 01:29:32.940 14834.967 - 14894.545: 98.6318% ( 3) 01:29:32.940 14894.545 - 14954.124: 98.6480% ( 2) 01:29:32.940 14954.124 - 15013.702: 98.6642% ( 2) 01:29:32.940 15013.702 - 15073.280: 98.7047% ( 5) 01:29:32.940 15073.280 - 15132.858: 98.7290% ( 3) 01:29:32.940 15132.858 - 15192.436: 98.8180% ( 11) 01:29:32.940 15192.436 - 15252.015: 98.8585% ( 5) 01:29:32.940 15252.015 - 15371.171: 98.9313% ( 9) 01:29:32.940 15371.171 - 15490.327: 98.9475% ( 2) 01:29:32.940 15490.327 - 15609.484: 98.9637% ( 2) 01:29:32.940 17873.455 - 17992.611: 99.0285% ( 8) 01:29:32.940 17992.611 - 18111.767: 99.0852% ( 7) 01:29:32.940 18111.767 - 18230.924: 99.1256% ( 5) 01:29:32.940 18230.924 - 18350.080: 99.1499% ( 3) 01:29:32.940 18350.080 - 18469.236: 99.1823% ( 4) 01:29:32.940 18469.236 - 18588.393: 99.2066% ( 3) 01:29:32.940 18588.393 - 18707.549: 99.2390% ( 4) 01:29:32.940 18707.549 - 18826.705: 99.2633% ( 3) 01:29:32.940 18826.705 - 18945.862: 99.2876% ( 3) 01:29:32.940 18945.862 - 19065.018: 99.3119% ( 3) 01:29:32.940 19065.018 - 19184.175: 99.3442% ( 4) 01:29:32.940 19184.175 - 19303.331: 99.3685% ( 3) 01:29:32.940 19303.331 - 19422.487: 99.3928% ( 3) 01:29:32.940 19422.487 - 19541.644: 99.4171% ( 3) 01:29:32.940 19541.644 - 19660.800: 99.4495% ( 4) 01:29:32.940 19660.800 - 19779.956: 99.4738% ( 3) 01:29:32.940 19779.956 - 19899.113: 99.4819% ( 1) 01:29:32.940 22997.178 - 23116.335: 99.4900% ( 1) 01:29:32.941 23116.335 - 23235.491: 99.5547% ( 8) 01:29:32.941 23235.491 - 23354.647: 99.6762% ( 15) 01:29:32.941 23354.647 - 23473.804: 
99.7247% ( 6) 01:29:32.941 24903.680 - 25022.836: 99.7571% ( 4) 01:29:32.941 25022.836 - 25141.993: 99.7814% ( 3) 01:29:32.941 25141.993 - 25261.149: 99.8057% ( 3) 01:29:32.941 25261.149 - 25380.305: 99.8300% ( 3) 01:29:32.941 25380.305 - 25499.462: 99.8543% ( 3) 01:29:32.941 25499.462 - 25618.618: 99.8786% ( 3) 01:29:32.941 25618.618 - 25737.775: 99.9109% ( 4) 01:29:32.941 25737.775 - 25856.931: 99.9352% ( 3) 01:29:32.941 25856.931 - 25976.087: 99.9676% ( 4) 01:29:32.941 25976.087 - 26095.244: 99.9919% ( 3) 01:29:32.941 26095.244 - 26214.400: 100.0000% ( 1) 01:29:32.941 01:29:33.199 05:24:24 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 01:29:33.199 01:29:33.199 real 0m2.980s 01:29:33.199 user 0m2.535s 01:29:33.199 sys 0m0.333s 01:29:33.199 05:24:24 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:33.199 05:24:24 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 01:29:33.199 ************************************ 01:29:33.199 END TEST nvme_perf 01:29:33.199 ************************************ 01:29:33.199 05:24:24 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 01:29:33.199 05:24:24 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:29:33.199 05:24:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:33.199 05:24:24 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:33.199 ************************************ 01:29:33.199 START TEST nvme_hello_world 01:29:33.199 ************************************ 01:29:33.199 05:24:24 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 01:29:33.457 Initializing NVMe Controllers 01:29:33.457 Attached to 0000:00:10.0 01:29:33.457 Namespace ID: 1 size: 6GB 01:29:33.457 Attached to 0000:00:11.0 01:29:33.457 Namespace ID: 1 size: 5GB 01:29:33.457 Attached to 0000:00:13.0 01:29:33.457 Namespace ID: 1 size: 1GB 01:29:33.457 Attached to 0000:00:12.0 01:29:33.457 Namespace ID: 1 size: 4GB 01:29:33.457 Namespace ID: 2 size: 4GB 01:29:33.457 Namespace ID: 3 size: 4GB 01:29:33.457 Initialization complete. 01:29:33.457 INFO: using host memory buffer for IO 01:29:33.457 Hello world! 01:29:33.457 INFO: using host memory buffer for IO 01:29:33.457 Hello world! 01:29:33.457 INFO: using host memory buffer for IO 01:29:33.457 Hello world! 01:29:33.457 INFO: using host memory buffer for IO 01:29:33.457 Hello world! 01:29:33.457 INFO: using host memory buffer for IO 01:29:33.457 Hello world! 01:29:33.457 INFO: using host memory buffer for IO 01:29:33.457 Hello world! 
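The hello_world run above found six namespaces across the four emulated controllers, matching the six 'Hello world!' lines: for each namespace the example writes a greeting through a host-memory buffer, reads it back, and prints it on success (the write-then-read-back behavior is this example's usual design, not something visible in this log). A sketch of replaying it outside the harness, assuming the same vagrant build tree; the HUGEMEM value is an arbitrary choice:

  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=2048 scripts/setup.sh      # rebind NVMe devices to a userspace driver and reserve hugepages
  sudo build/examples/hello_world -i 0    # -i 0: the shared-memory ID the harness passes as well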
01:29:33.716 01:29:33.716 real 0m0.484s 01:29:33.716 user 0m0.250s 01:29:33.716 sys 0m0.183s 01:29:33.716 05:24:25 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:33.716 ************************************ 01:29:33.716 END TEST nvme_hello_world 01:29:33.716 ************************************ 01:29:33.716 05:24:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 01:29:33.716 05:24:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 01:29:33.716 05:24:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:33.716 05:24:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:33.716 05:24:25 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:33.716 ************************************ 01:29:33.716 START TEST nvme_sgl 01:29:33.716 ************************************ 01:29:33.716 05:24:25 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 01:29:33.975 0000:00:10.0: build_io_request_0 Invalid IO length parameter 01:29:33.975 0000:00:10.0: build_io_request_1 Invalid IO length parameter 01:29:33.975 0000:00:10.0: build_io_request_3 Invalid IO length parameter 01:29:33.975 0000:00:10.0: build_io_request_8 Invalid IO length parameter 01:29:33.975 0000:00:10.0: build_io_request_9 Invalid IO length parameter 01:29:33.975 0000:00:10.0: build_io_request_11 Invalid IO length parameter 01:29:33.975 0000:00:11.0: build_io_request_0 Invalid IO length parameter 01:29:33.975 0000:00:11.0: build_io_request_1 Invalid IO length parameter 01:29:33.975 0000:00:11.0: build_io_request_3 Invalid IO length parameter 01:29:33.975 0000:00:11.0: build_io_request_8 Invalid IO length parameter 01:29:33.975 0000:00:11.0: build_io_request_9 Invalid IO length parameter 01:29:33.975 0000:00:11.0: build_io_request_11 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_0 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_1 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_2 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_3 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_4 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_5 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_6 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_7 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_8 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_9 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_10 Invalid IO length parameter 01:29:33.975 0000:00:13.0: build_io_request_11 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_0 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_1 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_2 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_3 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_4 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_5 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_6 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_7 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_8 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
01:29:33.975 0000:00:12.0: build_io_request_10 Invalid IO length parameter 01:29:33.975 0000:00:12.0: build_io_request_11 Invalid IO length parameter 01:29:34.233 NVMe Readv/Writev Request test 01:29:34.233 Attached to 0000:00:10.0 01:29:34.233 Attached to 0000:00:11.0 01:29:34.233 Attached to 0000:00:13.0 01:29:34.233 Attached to 0000:00:12.0 01:29:34.233 0000:00:10.0: build_io_request_2 test passed 01:29:34.233 0000:00:10.0: build_io_request_4 test passed 01:29:34.233 0000:00:10.0: build_io_request_5 test passed 01:29:34.233 0000:00:10.0: build_io_request_6 test passed 01:29:34.233 0000:00:10.0: build_io_request_7 test passed 01:29:34.233 0000:00:10.0: build_io_request_10 test passed 01:29:34.233 0000:00:11.0: build_io_request_2 test passed 01:29:34.233 0000:00:11.0: build_io_request_4 test passed 01:29:34.233 0000:00:11.0: build_io_request_5 test passed 01:29:34.233 0000:00:11.0: build_io_request_6 test passed 01:29:34.233 0000:00:11.0: build_io_request_7 test passed 01:29:34.233 0000:00:11.0: build_io_request_10 test passed 01:29:34.233 Cleaning up... 01:29:34.233 01:29:34.233 real 0m0.445s 01:29:34.234 user 0m0.215s 01:29:34.234 sys 0m0.178s 01:29:34.234 05:24:25 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:34.234 ************************************ 01:29:34.234 05:24:25 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 01:29:34.234 END TEST nvme_sgl 01:29:34.234 ************************************ 01:29:34.234 05:24:25 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 01:29:34.234 05:24:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:34.234 05:24:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:34.234 05:24:25 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:34.234 ************************************ 01:29:34.234 START TEST nvme_e2edp 01:29:34.234 ************************************ 01:29:34.234 05:24:25 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 01:29:34.492 NVMe Write/Read with End-to-End data protection test 01:29:34.492 Attached to 0000:00:10.0 01:29:34.492 Attached to 0000:00:11.0 01:29:34.492 Attached to 0000:00:13.0 01:29:34.492 Attached to 0000:00:12.0 01:29:34.492 Cleaning up... 
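Both tests above also run standalone; a sketch of the direct invocations, with paths copied from the run_test lines in this log. In the SGL output, the 'Invalid IO length parameter' lines are the expected negative cases, while the e2edp run attaching and immediately cleaning up suggests no namespace here advertises end-to-end protection:

  # SGL readv/writev request-building checks; requests 0, 1, 3, 8, 9 and 11 are expected rejections
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
  # End-to-end data protection write/read pass
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp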
01:29:34.492 01:29:34.493 real 0m0.341s 01:29:34.493 user 0m0.111s 01:29:34.493 sys 0m0.183s 01:29:34.493 05:24:25 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:34.493 05:24:25 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 01:29:34.493 ************************************ 01:29:34.493 END TEST nvme_e2edp 01:29:34.493 ************************************ 01:29:34.493 05:24:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 01:29:34.493 05:24:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:34.493 05:24:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:34.493 05:24:26 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:34.493 ************************************ 01:29:34.493 START TEST nvme_reserve 01:29:34.493 ************************************ 01:29:34.493 05:24:26 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 01:29:35.060 ===================================================== 01:29:35.060 NVMe Controller at PCI bus 0, device 16, function 0 01:29:35.060 ===================================================== 01:29:35.060 Reservations: Not Supported 01:29:35.060 ===================================================== 01:29:35.060 NVMe Controller at PCI bus 0, device 17, function 0 01:29:35.060 ===================================================== 01:29:35.060 Reservations: Not Supported 01:29:35.060 ===================================================== 01:29:35.060 NVMe Controller at PCI bus 0, device 19, function 0 01:29:35.060 ===================================================== 01:29:35.060 Reservations: Not Supported 01:29:35.060 ===================================================== 01:29:35.060 NVMe Controller at PCI bus 0, device 18, function 0 01:29:35.060 ===================================================== 01:29:35.060 Reservations: Not Supported 01:29:35.060 Reservation test passed 01:29:35.060 01:29:35.060 real 0m0.376s 01:29:35.060 user 0m0.129s 01:29:35.060 sys 0m0.200s 01:29:35.060 05:24:26 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:35.060 ************************************ 01:29:35.060 END TEST nvme_reserve 01:29:35.060 05:24:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 01:29:35.060 ************************************ 01:29:35.060 05:24:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 01:29:35.060 05:24:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:35.060 05:24:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:35.060 05:24:26 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:35.060 ************************************ 01:29:35.060 START TEST nvme_err_injection 01:29:35.060 ************************************ 01:29:35.060 05:24:26 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 01:29:35.320 NVMe Error Injection test 01:29:35.320 Attached to 0000:00:10.0 01:29:35.320 Attached to 0000:00:11.0 01:29:35.320 Attached to 0000:00:13.0 01:29:35.320 Attached to 0000:00:12.0 01:29:35.320 0000:00:10.0: get features failed as expected 01:29:35.320 0000:00:11.0: get features failed as expected 01:29:35.320 0000:00:13.0: get features failed as expected 01:29:35.320 0000:00:12.0: get features failed as expected 01:29:35.320 
0000:00:11.0: get features successfully as expected 01:29:35.320 0000:00:13.0: get features successfully as expected 01:29:35.320 0000:00:12.0: get features successfully as expected 01:29:35.320 0000:00:10.0: get features successfully as expected 01:29:35.320 0000:00:10.0: read failed as expected 01:29:35.320 0000:00:11.0: read failed as expected 01:29:35.320 0000:00:12.0: read failed as expected 01:29:35.320 0000:00:13.0: read failed as expected 01:29:35.320 0000:00:10.0: read successfully as expected 01:29:35.320 0000:00:11.0: read successfully as expected 01:29:35.321 0000:00:13.0: read successfully as expected 01:29:35.321 0000:00:12.0: read successfully as expected 01:29:35.321 Cleaning up... 01:29:35.321 01:29:35.321 real 0m0.373s 01:29:35.321 user 0m0.158s 01:29:35.321 sys 0m0.165s 01:29:35.321 05:24:26 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:35.321 ************************************ 01:29:35.321 05:24:26 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 01:29:35.321 END TEST nvme_err_injection 01:29:35.321 ************************************ 01:29:35.321 05:24:26 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 01:29:35.321 05:24:26 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 01:29:35.321 05:24:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:35.321 05:24:26 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:35.321 ************************************ 01:29:35.321 START TEST nvme_overhead 01:29:35.321 ************************************ 01:29:35.321 05:24:26 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 01:29:36.697 Initializing NVMe Controllers 01:29:36.697 Attached to 0000:00:10.0 01:29:36.697 Attached to 0000:00:11.0 01:29:36.697 Attached to 0000:00:13.0 01:29:36.697 Attached to 0000:00:12.0 01:29:36.697 Initialization complete. Launching workers. 
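The submit and complete histograms that follow are produced by the overhead tool launched above; a sketch of an equivalent manual run (flags copied from the run_test line; reading -o as I/O size in bytes, -t as runtime in seconds and -H as enabling the histograms is an interpretation, not stated in this log):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0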
01:29:36.698 submit (in ns) avg, min, max = 16203.9, 12094.5, 61386.8 01:29:36.698 complete (in ns) avg, min, max = 11420.9, 8477.3, 103331.4 01:29:36.698 01:29:36.698 Submit histogram 01:29:36.698 ================ 01:29:36.698 Range in us Cumulative Count 01:29:36.698 12.044 - 12.102: 0.0257% ( 2) 01:29:36.698 12.102 - 12.160: 0.0514% ( 2) 01:29:36.698 12.160 - 12.218: 0.0643% ( 1) 01:29:36.698 12.218 - 12.276: 0.2056% ( 11) 01:29:36.698 12.276 - 12.335: 0.3341% ( 10) 01:29:36.698 12.335 - 12.393: 0.5012% ( 13) 01:29:36.698 12.393 - 12.451: 0.7197% ( 17) 01:29:36.698 12.451 - 12.509: 1.1438% ( 33) 01:29:36.698 12.509 - 12.567: 1.6964% ( 43) 01:29:36.698 12.567 - 12.625: 2.1720% ( 37) 01:29:36.698 12.625 - 12.684: 2.7888% ( 48) 01:29:36.698 12.684 - 12.742: 3.2772% ( 38) 01:29:36.698 12.742 - 12.800: 4.1897% ( 71) 01:29:36.698 12.800 - 12.858: 5.3592% ( 91) 01:29:36.698 12.858 - 12.916: 6.6187% ( 98) 01:29:36.698 12.916 - 12.975: 7.8653% ( 97) 01:29:36.698 12.975 - 13.033: 8.8678% ( 78) 01:29:36.698 13.033 - 13.091: 9.8573% ( 77) 01:29:36.698 13.091 - 13.149: 10.7827% ( 72) 01:29:36.698 13.149 - 13.207: 12.0679% ( 100) 01:29:36.698 13.207 - 13.265: 13.4430% ( 107) 01:29:36.698 13.265 - 13.324: 14.8439% ( 109) 01:29:36.698 13.324 - 13.382: 15.9363% ( 85) 01:29:36.698 13.382 - 13.440: 17.4913% ( 121) 01:29:36.698 13.440 - 13.498: 19.8304% ( 182) 01:29:36.698 13.498 - 13.556: 22.4393% ( 203) 01:29:36.698 13.556 - 13.615: 25.2153% ( 216) 01:29:36.698 13.615 - 13.673: 28.0427% ( 220) 01:29:36.698 13.673 - 13.731: 30.3560% ( 180) 01:29:36.698 13.731 - 13.789: 32.0524% ( 132) 01:29:36.698 13.789 - 13.847: 33.5690% ( 118) 01:29:36.698 13.847 - 13.905: 34.9569% ( 108) 01:29:36.698 13.905 - 13.964: 36.3192% ( 106) 01:29:36.698 13.964 - 14.022: 37.6815% ( 106) 01:29:36.698 14.022 - 14.080: 38.7868% ( 86) 01:29:36.698 14.080 - 14.138: 39.8535% ( 83) 01:29:36.698 14.138 - 14.196: 40.9587% ( 86) 01:29:36.698 14.196 - 14.255: 42.0254% ( 83) 01:29:36.698 14.255 - 14.313: 43.4649% ( 112) 01:29:36.698 14.313 - 14.371: 45.1613% ( 132) 01:29:36.698 14.371 - 14.429: 47.1148% ( 152) 01:29:36.698 14.429 - 14.487: 49.4152% ( 179) 01:29:36.698 14.487 - 14.545: 51.8442% ( 189) 01:29:36.698 14.545 - 14.604: 54.8130% ( 231) 01:29:36.698 14.604 - 14.662: 57.8974% ( 240) 01:29:36.698 14.662 - 14.720: 60.3393% ( 190) 01:29:36.698 14.720 - 14.778: 62.3313% ( 155) 01:29:36.698 14.778 - 14.836: 63.8864% ( 121) 01:29:36.698 14.836 - 14.895: 65.0816% ( 93) 01:29:36.698 14.895 - 15.011: 66.9580% ( 146) 01:29:36.698 15.011 - 15.127: 68.3460% ( 108) 01:29:36.698 15.127 - 15.244: 69.4384% ( 85) 01:29:36.698 15.244 - 15.360: 70.2609% ( 64) 01:29:36.698 15.360 - 15.476: 71.2376% ( 76) 01:29:36.698 15.476 - 15.593: 72.0087% ( 60) 01:29:36.698 15.593 - 15.709: 72.6899% ( 53) 01:29:36.698 15.709 - 15.825: 73.1911% ( 39) 01:29:36.698 15.825 - 15.942: 73.5638% ( 29) 01:29:36.698 15.942 - 16.058: 73.7823% ( 17) 01:29:36.698 16.058 - 16.175: 73.9879% ( 16) 01:29:36.698 16.175 - 16.291: 74.1293% ( 11) 01:29:36.698 16.291 - 16.407: 74.2193% ( 7) 01:29:36.698 16.407 - 16.524: 74.3606% ( 11) 01:29:36.698 16.524 - 16.640: 74.4249% ( 5) 01:29:36.698 16.640 - 16.756: 74.5148% ( 7) 01:29:36.698 16.756 - 16.873: 74.6177% ( 8) 01:29:36.698 16.873 - 16.989: 74.6691% ( 4) 01:29:36.698 16.989 - 17.105: 74.6819% ( 1) 01:29:36.698 17.105 - 17.222: 74.7333% ( 4) 01:29:36.698 17.222 - 17.338: 74.8361% ( 8) 01:29:36.698 17.338 - 17.455: 75.2602% ( 33) 01:29:36.698 17.455 - 17.571: 76.0185% ( 59) 01:29:36.698 17.571 - 17.687: 77.3679% ( 105) 01:29:36.698 
17.687 - 17.804: 79.0130% ( 128) 01:29:36.698 17.804 - 17.920: 80.7480% ( 135) 01:29:36.698 17.920 - 18.036: 81.9818% ( 96) 01:29:36.698 18.036 - 18.153: 82.9585% ( 76) 01:29:36.698 18.153 - 18.269: 83.5240% ( 44) 01:29:36.698 18.269 - 18.385: 83.9609% ( 34) 01:29:36.698 18.385 - 18.502: 84.3079% ( 27) 01:29:36.698 18.502 - 18.618: 84.7706% ( 36) 01:29:36.698 18.618 - 18.735: 85.2975% ( 41) 01:29:36.698 18.735 - 18.851: 85.7859% ( 38) 01:29:36.698 18.851 - 18.967: 86.0429% ( 20) 01:29:36.698 18.967 - 19.084: 86.2486% ( 16) 01:29:36.698 19.084 - 19.200: 86.4542% ( 16) 01:29:36.698 19.200 - 19.316: 86.6084% ( 12) 01:29:36.698 19.316 - 19.433: 86.8140% ( 16) 01:29:36.698 19.433 - 19.549: 86.9683% ( 12) 01:29:36.698 19.549 - 19.665: 87.0839% ( 9) 01:29:36.698 19.665 - 19.782: 87.2896% ( 16) 01:29:36.698 19.782 - 19.898: 87.4438% ( 12) 01:29:36.698 19.898 - 20.015: 87.5980% ( 12) 01:29:36.698 20.015 - 20.131: 87.7265% ( 10) 01:29:36.698 20.131 - 20.247: 87.8936% ( 13) 01:29:36.698 20.247 - 20.364: 88.0992% ( 16) 01:29:36.698 20.364 - 20.480: 88.2920% ( 15) 01:29:36.698 20.480 - 20.596: 88.4591% ( 13) 01:29:36.698 20.596 - 20.713: 88.6775% ( 17) 01:29:36.698 20.713 - 20.829: 88.8061% ( 10) 01:29:36.698 20.829 - 20.945: 88.9089% ( 8) 01:29:36.698 20.945 - 21.062: 89.1017% ( 15) 01:29:36.698 21.062 - 21.178: 89.1916% ( 7) 01:29:36.698 21.178 - 21.295: 89.2687% ( 6) 01:29:36.698 21.295 - 21.411: 89.3844% ( 9) 01:29:36.698 21.411 - 21.527: 89.5129% ( 10) 01:29:36.698 21.527 - 21.644: 89.6029% ( 7) 01:29:36.698 21.644 - 21.760: 89.6800% ( 6) 01:29:36.698 21.760 - 21.876: 89.7957% ( 9) 01:29:36.698 21.876 - 21.993: 89.8856% ( 7) 01:29:36.698 21.993 - 22.109: 90.0270% ( 11) 01:29:36.698 22.109 - 22.225: 90.1427% ( 9) 01:29:36.698 22.225 - 22.342: 90.2326% ( 7) 01:29:36.698 22.342 - 22.458: 90.3483% ( 9) 01:29:36.698 22.458 - 22.575: 90.4511% ( 8) 01:29:36.698 22.575 - 22.691: 90.5539% ( 8) 01:29:36.698 22.691 - 22.807: 90.6053% ( 4) 01:29:36.698 22.807 - 22.924: 90.6953% ( 7) 01:29:36.698 22.924 - 23.040: 90.9009% ( 16) 01:29:36.698 23.040 - 23.156: 90.9780% ( 6) 01:29:36.698 23.156 - 23.273: 91.0037% ( 2) 01:29:36.698 23.273 - 23.389: 91.0551% ( 4) 01:29:36.698 23.389 - 23.505: 91.1322% ( 6) 01:29:36.698 23.505 - 23.622: 91.2094% ( 6) 01:29:36.698 23.622 - 23.738: 91.2865% ( 6) 01:29:36.698 23.738 - 23.855: 91.3636% ( 6) 01:29:36.698 23.855 - 23.971: 91.4150% ( 4) 01:29:36.698 23.971 - 24.087: 91.5178% ( 8) 01:29:36.698 24.087 - 24.204: 91.6078% ( 7) 01:29:36.698 24.204 - 24.320: 91.6592% ( 4) 01:29:36.698 24.320 - 24.436: 91.8391% ( 14) 01:29:36.698 24.436 - 24.553: 91.9291% ( 7) 01:29:36.698 24.553 - 24.669: 92.0062% ( 6) 01:29:36.698 24.669 - 24.785: 92.1347% ( 10) 01:29:36.698 24.785 - 24.902: 92.2375% ( 8) 01:29:36.698 24.902 - 25.018: 92.3275% ( 7) 01:29:36.698 25.018 - 25.135: 92.4303% ( 8) 01:29:36.698 25.135 - 25.251: 92.4817% ( 4) 01:29:36.698 25.251 - 25.367: 92.5459% ( 5) 01:29:36.698 25.367 - 25.484: 92.5974% ( 4) 01:29:36.698 25.484 - 25.600: 92.6102% ( 1) 01:29:36.698 25.600 - 25.716: 92.6488% ( 3) 01:29:36.698 25.716 - 25.833: 92.7259% ( 6) 01:29:36.698 25.833 - 25.949: 92.7901% ( 5) 01:29:36.698 25.949 - 26.065: 92.8158% ( 2) 01:29:36.698 26.065 - 26.182: 92.8544% ( 3) 01:29:36.698 26.182 - 26.298: 92.8672% ( 1) 01:29:36.698 26.298 - 26.415: 92.8801% ( 1) 01:29:36.698 26.415 - 26.531: 92.9058% ( 2) 01:29:36.698 26.531 - 26.647: 92.9315% ( 2) 01:29:36.698 26.647 - 26.764: 92.9829% ( 4) 01:29:36.698 26.764 - 26.880: 93.0086% ( 2) 01:29:36.698 26.996 - 27.113: 93.0343% ( 2) 01:29:36.698 
27.113 - 27.229: 93.0857% ( 4) 01:29:36.698 27.229 - 27.345: 93.1757% ( 7) 01:29:36.698 27.345 - 27.462: 93.2399% ( 5) 01:29:36.698 27.462 - 27.578: 93.3813% ( 11) 01:29:36.698 27.578 - 27.695: 93.6126% ( 18) 01:29:36.698 27.695 - 27.811: 93.9082% ( 23) 01:29:36.698 27.811 - 27.927: 94.1267% ( 17) 01:29:36.698 27.927 - 28.044: 94.4095% ( 22) 01:29:36.698 28.044 - 28.160: 94.8079% ( 31) 01:29:36.698 28.160 - 28.276: 95.0520% ( 19) 01:29:36.698 28.276 - 28.393: 95.2834% ( 18) 01:29:36.698 28.393 - 28.509: 95.5019% ( 17) 01:29:36.698 28.509 - 28.625: 95.7332% ( 18) 01:29:36.698 28.625 - 28.742: 95.9517% ( 17) 01:29:36.698 28.742 - 28.858: 96.1188% ( 13) 01:29:36.698 28.858 - 28.975: 96.3244% ( 16) 01:29:36.698 28.975 - 29.091: 96.6200% ( 23) 01:29:36.698 29.091 - 29.207: 96.8770% ( 20) 01:29:36.698 29.207 - 29.324: 97.0184% ( 11) 01:29:36.698 29.324 - 29.440: 97.2754% ( 20) 01:29:36.698 29.440 - 29.556: 97.4810% ( 16) 01:29:36.698 29.556 - 29.673: 97.6867% ( 16) 01:29:36.698 29.673 - 29.789: 97.8537% ( 13) 01:29:36.698 29.789 - 30.022: 98.0979% ( 19) 01:29:36.699 30.022 - 30.255: 98.2779% ( 14) 01:29:36.699 30.255 - 30.487: 98.4192% ( 11) 01:29:36.699 30.487 - 30.720: 98.4835% ( 5) 01:29:36.699 30.720 - 30.953: 98.5349% ( 4) 01:29:36.699 30.953 - 31.185: 98.5992% ( 5) 01:29:36.699 31.185 - 31.418: 98.6120% ( 1) 01:29:36.699 31.418 - 31.651: 98.7020% ( 7) 01:29:36.699 31.651 - 31.884: 98.7277% ( 2) 01:29:36.699 31.884 - 32.116: 98.7405% ( 1) 01:29:36.699 32.116 - 32.349: 98.7534% ( 1) 01:29:36.699 32.349 - 32.582: 98.8048% ( 4) 01:29:36.699 32.582 - 32.815: 98.8305% ( 2) 01:29:36.699 32.815 - 33.047: 98.8433% ( 1) 01:29:36.699 33.047 - 33.280: 98.9076% ( 5) 01:29:36.699 33.280 - 33.513: 98.9462% ( 3) 01:29:36.699 33.513 - 33.745: 98.9719% ( 2) 01:29:36.699 33.745 - 33.978: 99.0104% ( 3) 01:29:36.699 33.978 - 34.211: 99.0233% ( 1) 01:29:36.699 34.211 - 34.444: 99.0490% ( 2) 01:29:36.699 34.444 - 34.676: 99.0747% ( 2) 01:29:36.699 34.676 - 34.909: 99.1004% ( 2) 01:29:36.699 34.909 - 35.142: 99.1132% ( 1) 01:29:36.699 35.375 - 35.607: 99.1261% ( 1) 01:29:36.699 35.607 - 35.840: 99.2032% ( 6) 01:29:36.699 35.840 - 36.073: 99.2417% ( 3) 01:29:36.699 36.073 - 36.305: 99.2803% ( 3) 01:29:36.699 36.305 - 36.538: 99.2931% ( 1) 01:29:36.699 36.538 - 36.771: 99.3189% ( 2) 01:29:36.699 37.004 - 37.236: 99.3574% ( 3) 01:29:36.699 37.236 - 37.469: 99.3831% ( 2) 01:29:36.699 37.702 - 37.935: 99.4345% ( 4) 01:29:36.699 37.935 - 38.167: 99.4859% ( 4) 01:29:36.699 38.167 - 38.400: 99.4988% ( 1) 01:29:36.699 38.633 - 38.865: 99.5245% ( 2) 01:29:36.699 39.098 - 39.331: 99.5373% ( 1) 01:29:36.699 39.331 - 39.564: 99.5759% ( 3) 01:29:36.699 39.564 - 39.796: 99.6144% ( 3) 01:29:36.699 39.796 - 40.029: 99.6401% ( 2) 01:29:36.699 40.029 - 40.262: 99.6530% ( 1) 01:29:36.699 40.495 - 40.727: 99.6787% ( 2) 01:29:36.699 40.727 - 40.960: 99.6916% ( 1) 01:29:36.699 41.425 - 41.658: 99.7044% ( 1) 01:29:36.699 41.658 - 41.891: 99.7173% ( 1) 01:29:36.699 42.124 - 42.356: 99.7301% ( 1) 01:29:36.699 42.356 - 42.589: 99.7430% ( 1) 01:29:36.699 42.589 - 42.822: 99.7558% ( 1) 01:29:36.699 43.055 - 43.287: 99.7687% ( 1) 01:29:36.699 43.287 - 43.520: 99.7944% ( 2) 01:29:36.699 44.218 - 44.451: 99.8329% ( 3) 01:29:36.699 44.916 - 45.149: 99.8458% ( 1) 01:29:36.699 45.382 - 45.615: 99.8586% ( 1) 01:29:36.699 45.615 - 45.847: 99.8715% ( 1) 01:29:36.699 46.313 - 46.545: 99.8843% ( 1) 01:29:36.699 47.942 - 48.175: 99.8972% ( 1) 01:29:36.699 48.175 - 48.407: 99.9100% ( 1) 01:29:36.699 49.804 - 50.036: 99.9229% ( 1) 01:29:36.699 50.967 - 
51.200: 99.9357% ( 1) 01:29:36.699 52.131 - 52.364: 99.9486% ( 1) 01:29:36.699 53.062 - 53.295: 99.9614% ( 1) 01:29:36.699 53.527 - 53.760: 99.9743% ( 1) 01:29:36.699 55.156 - 55.389: 99.9871% ( 1) 01:29:36.699 60.975 - 61.440: 100.0000% ( 1) 01:29:36.699 01:29:36.699 Complete histogram 01:29:36.699 ================== 01:29:36.699 Range in us Cumulative Count 01:29:36.699 8.436 - 8.495: 0.0257% ( 2) 01:29:36.699 8.495 - 8.553: 0.1028% ( 6) 01:29:36.699 8.553 - 8.611: 0.2185% ( 9) 01:29:36.699 8.611 - 8.669: 0.3727% ( 12) 01:29:36.699 8.669 - 8.727: 0.5783% ( 16) 01:29:36.699 8.727 - 8.785: 0.8354% ( 20) 01:29:36.699 8.785 - 8.844: 1.1952% ( 28) 01:29:36.699 8.844 - 8.902: 1.8378% ( 50) 01:29:36.699 8.902 - 8.960: 2.6732% ( 65) 01:29:36.699 8.960 - 9.018: 3.5085% ( 65) 01:29:36.699 9.018 - 9.076: 4.8837% ( 107) 01:29:36.699 9.076 - 9.135: 6.5030% ( 126) 01:29:36.699 9.135 - 9.193: 8.3023% ( 140) 01:29:36.699 9.193 - 9.251: 11.3867% ( 240) 01:29:36.699 9.251 - 9.309: 15.9877% ( 358) 01:29:36.699 9.309 - 9.367: 22.0794% ( 474) 01:29:36.699 9.367 - 9.425: 28.2226% ( 478) 01:29:36.699 9.425 - 9.484: 33.4661% ( 408) 01:29:36.699 9.484 - 9.542: 37.2317% ( 293) 01:29:36.699 9.542 - 9.600: 40.4061% ( 247) 01:29:36.699 9.600 - 9.658: 43.7733% ( 262) 01:29:36.699 9.658 - 9.716: 48.6570% ( 380) 01:29:36.699 9.716 - 9.775: 53.1294% ( 348) 01:29:36.699 9.775 - 9.833: 56.9336% ( 296) 01:29:36.699 9.833 - 9.891: 59.5425% ( 203) 01:29:36.699 9.891 - 9.949: 61.3289% ( 139) 01:29:36.699 9.949 - 10.007: 62.8968% ( 122) 01:29:36.699 10.007 - 10.065: 64.1820% ( 100) 01:29:36.699 10.065 - 10.124: 65.2230% ( 81) 01:29:36.699 10.124 - 10.182: 66.1483% ( 72) 01:29:36.699 10.182 - 10.240: 66.8166% ( 52) 01:29:36.699 10.240 - 10.298: 67.5620% ( 58) 01:29:36.699 10.298 - 10.356: 68.2817% ( 56) 01:29:36.699 10.356 - 10.415: 69.1042% ( 64) 01:29:36.699 10.415 - 10.473: 69.9010% ( 62) 01:29:36.699 10.473 - 10.531: 70.5950% ( 54) 01:29:36.699 10.531 - 10.589: 71.3276% ( 57) 01:29:36.699 10.589 - 10.647: 72.0344% ( 55) 01:29:36.699 10.647 - 10.705: 72.6256% ( 46) 01:29:36.699 10.705 - 10.764: 73.3453% ( 56) 01:29:36.699 10.764 - 10.822: 74.0136% ( 52) 01:29:36.699 10.822 - 10.880: 74.4120% ( 31) 01:29:36.699 10.880 - 10.938: 74.8233% ( 32) 01:29:36.699 10.938 - 10.996: 75.0675% ( 19) 01:29:36.699 10.996 - 11.055: 75.2474% ( 14) 01:29:36.699 11.055 - 11.113: 75.3117% ( 5) 01:29:36.699 11.113 - 11.171: 75.4273% ( 9) 01:29:36.699 11.171 - 11.229: 75.4659% ( 3) 01:29:36.699 11.229 - 11.287: 75.5301% ( 5) 01:29:36.699 11.287 - 11.345: 75.5687% ( 3) 01:29:36.699 11.345 - 11.404: 75.6330% ( 5) 01:29:36.699 11.404 - 11.462: 75.6972% ( 5) 01:29:36.699 11.462 - 11.520: 75.9285% ( 18) 01:29:36.699 11.520 - 11.578: 76.7639% ( 65) 01:29:36.699 11.578 - 11.636: 78.3318% ( 122) 01:29:36.699 11.636 - 11.695: 80.4010% ( 161) 01:29:36.699 11.695 - 11.753: 82.2388% ( 143) 01:29:36.699 11.753 - 11.811: 83.4854% ( 97) 01:29:36.699 11.811 - 11.869: 84.5264% ( 81) 01:29:36.699 11.869 - 11.927: 84.9505% ( 33) 01:29:36.699 11.927 - 11.985: 85.3489% ( 31) 01:29:36.699 11.985 - 12.044: 85.4774% ( 10) 01:29:36.699 12.044 - 12.102: 85.5803% ( 8) 01:29:36.699 12.102 - 12.160: 85.6445% ( 5) 01:29:36.699 12.160 - 12.218: 85.8116% ( 13) 01:29:36.699 12.218 - 12.276: 85.9016% ( 7) 01:29:36.699 12.276 - 12.335: 85.9787% ( 6) 01:29:36.699 12.335 - 12.393: 86.0815% ( 8) 01:29:36.699 12.393 - 12.451: 86.1971% ( 9) 01:29:36.699 12.451 - 12.509: 86.4028% ( 16) 01:29:36.699 12.509 - 12.567: 86.5313% ( 10) 01:29:36.699 12.567 - 12.625: 86.6598% ( 10) 01:29:36.699 
12.625 - 12.684: 86.8397% ( 14) 01:29:36.699 12.684 - 12.742: 87.0325% ( 15) 01:29:36.699 12.742 - 12.800: 87.2767% ( 19) 01:29:36.699 12.800 - 12.858: 87.5594% ( 22) 01:29:36.699 12.858 - 12.916: 87.7651% ( 16) 01:29:36.699 12.916 - 12.975: 87.9964% ( 18) 01:29:36.699 12.975 - 13.033: 88.1635% ( 13) 01:29:36.699 13.033 - 13.091: 88.3048% ( 11) 01:29:36.699 13.091 - 13.149: 88.4205% ( 9) 01:29:36.699 13.149 - 13.207: 88.4848% ( 5) 01:29:36.699 13.207 - 13.265: 88.5105% ( 2) 01:29:36.699 13.324 - 13.382: 88.5362% ( 2) 01:29:36.699 13.382 - 13.440: 88.5619% ( 2) 01:29:36.699 13.440 - 13.498: 88.5747% ( 1) 01:29:36.699 13.556 - 13.615: 88.5876% ( 1) 01:29:36.699 13.615 - 13.673: 88.6133% ( 2) 01:29:36.699 13.673 - 13.731: 88.6390% ( 2) 01:29:36.699 13.789 - 13.847: 88.6775% ( 3) 01:29:36.699 13.847 - 13.905: 88.7161% ( 3) 01:29:36.699 13.905 - 13.964: 88.7290% ( 1) 01:29:36.699 13.964 - 14.022: 88.7418% ( 1) 01:29:36.699 14.022 - 14.080: 88.7932% ( 4) 01:29:36.699 14.138 - 14.196: 88.8061% ( 1) 01:29:36.699 14.196 - 14.255: 88.8575% ( 4) 01:29:36.699 14.255 - 14.313: 88.9089% ( 4) 01:29:36.699 14.313 - 14.371: 88.9603% ( 4) 01:29:36.699 14.371 - 14.429: 88.9988% ( 3) 01:29:36.699 14.429 - 14.487: 89.0117% ( 1) 01:29:36.699 14.487 - 14.545: 89.0245% ( 1) 01:29:36.699 14.604 - 14.662: 89.0631% ( 3) 01:29:36.699 14.662 - 14.720: 89.1017% ( 3) 01:29:36.699 14.720 - 14.778: 89.1145% ( 1) 01:29:36.699 14.836 - 14.895: 89.1274% ( 1) 01:29:36.699 14.895 - 15.011: 89.1659% ( 3) 01:29:36.699 15.011 - 15.127: 89.1916% ( 2) 01:29:36.699 15.127 - 15.244: 89.2173% ( 2) 01:29:36.699 15.244 - 15.360: 89.3330% ( 9) 01:29:36.699 15.360 - 15.476: 89.3972% ( 5) 01:29:36.699 15.476 - 15.593: 89.4615% ( 5) 01:29:36.699 15.593 - 15.709: 89.5643% ( 8) 01:29:36.699 15.709 - 15.825: 89.6286% ( 5) 01:29:36.699 15.825 - 15.942: 89.7314% ( 8) 01:29:36.699 15.942 - 16.058: 89.8599% ( 10) 01:29:36.699 16.058 - 16.175: 89.9884% ( 10) 01:29:36.699 16.175 - 16.291: 90.1427% ( 12) 01:29:36.699 16.291 - 16.407: 90.2455% ( 8) 01:29:36.699 16.407 - 16.524: 90.3611% ( 9) 01:29:36.700 16.524 - 16.640: 90.4640% ( 8) 01:29:36.700 16.640 - 16.756: 90.5668% ( 8) 01:29:36.700 16.756 - 16.873: 90.6824% ( 9) 01:29:36.700 16.873 - 16.989: 90.7595% ( 6) 01:29:36.700 16.989 - 17.105: 90.8109% ( 4) 01:29:36.700 17.105 - 17.222: 90.8752% ( 5) 01:29:36.700 17.222 - 17.338: 91.0166% ( 11) 01:29:36.700 17.338 - 17.455: 91.1451% ( 10) 01:29:36.700 17.455 - 17.571: 91.2094% ( 5) 01:29:36.700 17.571 - 17.687: 91.2479% ( 3) 01:29:36.700 17.687 - 17.804: 91.2993% ( 4) 01:29:36.700 17.804 - 17.920: 91.3636% ( 5) 01:29:36.700 17.920 - 18.036: 91.4535% ( 7) 01:29:36.700 18.036 - 18.153: 91.5178% ( 5) 01:29:36.700 18.153 - 18.269: 91.5564% ( 3) 01:29:36.700 18.269 - 18.385: 91.6335% ( 6) 01:29:36.700 18.385 - 18.502: 91.7234% ( 7) 01:29:36.700 18.502 - 18.618: 91.7748% ( 4) 01:29:36.700 18.618 - 18.735: 91.8648% ( 7) 01:29:36.700 18.735 - 18.851: 91.9034% ( 3) 01:29:36.700 18.851 - 18.967: 91.9162% ( 1) 01:29:36.700 18.967 - 19.084: 91.9676% ( 4) 01:29:36.700 19.084 - 19.200: 92.0704% ( 8) 01:29:36.700 19.200 - 19.316: 92.1475% ( 6) 01:29:36.700 19.316 - 19.433: 92.2375% ( 7) 01:29:36.700 19.433 - 19.549: 92.3146% ( 6) 01:29:36.700 19.549 - 19.665: 92.3789% ( 5) 01:29:36.700 19.665 - 19.782: 92.4174% ( 3) 01:29:36.700 19.782 - 19.898: 92.4817% ( 5) 01:29:36.700 19.898 - 20.015: 92.5202% ( 3) 01:29:36.700 20.015 - 20.131: 92.5716% ( 4) 01:29:36.700 20.131 - 20.247: 92.5974% ( 2) 01:29:36.700 20.247 - 20.364: 92.7002% ( 8) 01:29:36.700 20.364 - 20.480: 
92.8030% ( 8) 01:29:36.700 20.480 - 20.596: 92.8415% ( 3) 01:29:36.700 20.596 - 20.713: 92.9058% ( 5) 01:29:36.700 20.713 - 20.829: 93.0343% ( 10) 01:29:36.700 20.829 - 20.945: 93.0857% ( 4) 01:29:36.700 20.945 - 21.062: 93.1885% ( 8) 01:29:36.700 21.062 - 21.178: 93.2785% ( 7) 01:29:36.700 21.178 - 21.295: 93.3428% ( 5) 01:29:36.700 21.295 - 21.411: 93.4327% ( 7) 01:29:36.700 21.411 - 21.527: 93.4841% ( 4) 01:29:36.700 21.527 - 21.644: 93.5227% ( 3) 01:29:36.700 21.644 - 21.760: 93.5869% ( 5) 01:29:36.700 21.760 - 21.876: 93.6126% ( 2) 01:29:36.700 21.876 - 21.993: 93.6255% ( 1) 01:29:36.700 21.993 - 22.109: 93.7026% ( 6) 01:29:36.700 22.109 - 22.225: 93.7412% ( 3) 01:29:36.700 22.225 - 22.342: 93.7540% ( 1) 01:29:36.700 22.458 - 22.575: 93.7797% ( 2) 01:29:36.700 22.575 - 22.691: 93.7926% ( 1) 01:29:36.700 22.691 - 22.807: 93.8568% ( 5) 01:29:36.700 22.924 - 23.040: 93.8954% ( 3) 01:29:36.700 23.040 - 23.156: 93.9082% ( 1) 01:29:36.700 23.156 - 23.273: 93.9982% ( 7) 01:29:36.700 23.273 - 23.389: 94.0368% ( 3) 01:29:36.700 23.389 - 23.505: 94.1653% ( 10) 01:29:36.700 23.505 - 23.622: 94.3195% ( 12) 01:29:36.700 23.622 - 23.738: 94.5765% ( 20) 01:29:36.700 23.738 - 23.855: 94.9235% ( 27) 01:29:36.700 23.855 - 23.971: 95.3733% ( 35) 01:29:36.700 23.971 - 24.087: 95.7718% ( 31) 01:29:36.700 24.087 - 24.204: 96.1959% ( 33) 01:29:36.700 24.204 - 24.320: 96.5172% ( 25) 01:29:36.700 24.320 - 24.436: 96.8256% ( 24) 01:29:36.700 24.436 - 24.553: 97.1726% ( 27) 01:29:36.700 24.553 - 24.669: 97.3654% ( 15) 01:29:36.700 24.669 - 24.785: 97.5453% ( 14) 01:29:36.700 24.785 - 24.902: 97.6481% ( 8) 01:29:36.700 24.902 - 25.018: 97.7895% ( 11) 01:29:36.700 25.018 - 25.135: 97.9309% ( 11) 01:29:36.700 25.135 - 25.251: 97.9694% ( 3) 01:29:36.700 25.251 - 25.367: 98.0851% ( 9) 01:29:36.700 25.367 - 25.484: 98.1622% ( 6) 01:29:36.700 25.484 - 25.600: 98.2779% ( 9) 01:29:36.700 25.600 - 25.716: 98.3935% ( 9) 01:29:36.700 25.716 - 25.833: 98.5349% ( 11) 01:29:36.700 25.833 - 25.949: 98.5863% ( 4) 01:29:36.700 26.065 - 26.182: 98.6249% ( 3) 01:29:36.700 26.182 - 26.298: 98.6763% ( 4) 01:29:36.700 26.298 - 26.415: 98.7405% ( 5) 01:29:36.700 26.415 - 26.531: 98.8048% ( 5) 01:29:36.700 26.531 - 26.647: 98.8176% ( 1) 01:29:36.700 26.647 - 26.764: 98.8305% ( 1) 01:29:36.700 26.764 - 26.880: 98.8562% ( 2) 01:29:36.700 26.880 - 26.996: 98.8819% ( 2) 01:29:36.700 26.996 - 27.113: 98.9076% ( 2) 01:29:36.700 27.229 - 27.345: 98.9204% ( 1) 01:29:36.700 27.345 - 27.462: 98.9333% ( 1) 01:29:36.700 27.578 - 27.695: 98.9462% ( 1) 01:29:36.700 27.695 - 27.811: 98.9590% ( 1) 01:29:36.700 27.927 - 28.044: 98.9847% ( 2) 01:29:36.700 28.160 - 28.276: 98.9976% ( 1) 01:29:36.700 28.742 - 28.858: 99.0233% ( 2) 01:29:36.700 28.858 - 28.975: 99.0361% ( 1) 01:29:36.700 28.975 - 29.091: 99.0618% ( 2) 01:29:36.700 29.091 - 29.207: 99.0747% ( 1) 01:29:36.700 29.324 - 29.440: 99.0875% ( 1) 01:29:36.700 29.789 - 30.022: 99.1261% ( 3) 01:29:36.700 30.022 - 30.255: 99.1389% ( 1) 01:29:36.700 30.255 - 30.487: 99.2032% ( 5) 01:29:36.700 30.487 - 30.720: 99.2160% ( 1) 01:29:36.700 30.720 - 30.953: 99.2546% ( 3) 01:29:36.700 30.953 - 31.185: 99.2803% ( 2) 01:29:36.700 31.418 - 31.651: 99.2931% ( 1) 01:29:36.700 31.651 - 31.884: 99.3060% ( 1) 01:29:36.700 32.116 - 32.349: 99.3317% ( 2) 01:29:36.700 32.349 - 32.582: 99.3446% ( 1) 01:29:36.700 32.582 - 32.815: 99.3574% ( 1) 01:29:36.700 33.047 - 33.280: 99.3831% ( 2) 01:29:36.700 33.280 - 33.513: 99.4217% ( 3) 01:29:36.700 33.513 - 33.745: 99.4345% ( 1) 01:29:36.700 33.978 - 34.211: 99.4731% ( 3) 
01:29:36.700 34.909 - 35.142: 99.5116% ( 3) 01:29:36.700 35.375 - 35.607: 99.5245% ( 1) 01:29:36.700 35.607 - 35.840: 99.5373% ( 1) 01:29:36.700 35.840 - 36.073: 99.5502% ( 1) 01:29:36.700 36.073 - 36.305: 99.5759% ( 2) 01:29:36.700 36.771 - 37.004: 99.6016% ( 2) 01:29:36.700 37.236 - 37.469: 99.6401% ( 3) 01:29:36.700 37.702 - 37.935: 99.6530% ( 1) 01:29:36.700 37.935 - 38.167: 99.6659% ( 1) 01:29:36.700 38.167 - 38.400: 99.6787% ( 1) 01:29:36.700 38.633 - 38.865: 99.6916% ( 1) 01:29:36.700 38.865 - 39.098: 99.7173% ( 2) 01:29:36.700 39.098 - 39.331: 99.7430% ( 2) 01:29:36.700 39.331 - 39.564: 99.7687% ( 2) 01:29:36.700 39.564 - 39.796: 99.7815% ( 1) 01:29:36.700 39.796 - 40.029: 99.7944% ( 1) 01:29:36.700 40.029 - 40.262: 99.8072% ( 1) 01:29:36.700 40.495 - 40.727: 99.8201% ( 1) 01:29:36.700 40.960 - 41.193: 99.8329% ( 1) 01:29:36.700 41.193 - 41.425: 99.8458% ( 1) 01:29:36.700 41.658 - 41.891: 99.8586% ( 1) 01:29:36.700 41.891 - 42.124: 99.8715% ( 1) 01:29:36.700 43.287 - 43.520: 99.8843% ( 1) 01:29:36.700 43.985 - 44.218: 99.8972% ( 1) 01:29:36.700 44.451 - 44.684: 99.9100% ( 1) 01:29:36.700 45.149 - 45.382: 99.9229% ( 1) 01:29:36.700 46.080 - 46.313: 99.9357% ( 1) 01:29:36.700 47.942 - 48.175: 99.9486% ( 1) 01:29:36.700 48.873 - 49.105: 99.9614% ( 1) 01:29:36.700 56.553 - 56.785: 99.9743% ( 1) 01:29:36.700 88.902 - 89.367: 99.9871% ( 1) 01:29:36.700 103.331 - 103.796: 100.0000% ( 1) 01:29:36.700 01:29:36.700 01:29:36.700 real 0m1.352s 01:29:36.700 user 0m1.128s 01:29:36.700 sys 0m0.170s 01:29:36.700 05:24:28 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:36.700 ************************************ 01:29:36.700 END TEST nvme_overhead 01:29:36.700 05:24:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 01:29:36.700 ************************************ 01:29:36.700 05:24:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 01:29:36.700 05:24:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:29:36.700 05:24:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:36.700 05:24:28 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:36.700 ************************************ 01:29:36.700 START TEST nvme_arbitration 01:29:36.700 ************************************ 01:29:36.700 05:24:28 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 01:29:40.888 Initializing NVMe Controllers 01:29:40.888 Attached to 0000:00:10.0 01:29:40.888 Attached to 0000:00:11.0 01:29:40.888 Attached to 0000:00:13.0 01:29:40.888 Attached to 0000:00:12.0 01:29:40.888 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 01:29:40.888 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 01:29:40.888 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 01:29:40.888 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 01:29:40.888 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 01:29:40.888 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 01:29:40.888 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 01:29:40.888 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 01:29:40.888 Initialization complete. Launching workers. 
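The per-core arbitration results that follow come from the configuration line printed just above; the short form used by run_test is equivalent (command copied from this log, with the remaining parameters left at the defaults the example expands to):

  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0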
01:29:40.888 Starting thread on core 1 with urgent priority queue
01:29:40.888 Starting thread on core 2 with urgent priority queue
01:29:40.888 Starting thread on core 3 with urgent priority queue
01:29:40.888 Starting thread on core 0 with urgent priority queue
01:29:40.888 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios
01:29:40.888 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios
01:29:40.888 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios
01:29:40.888 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios
01:29:40.888 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios
01:29:40.888 QEMU NVMe Ctrl (12342 ) core 3: 746.67 IO/s 133.93 secs/100000 ios
01:29:40.888 ========================================================
01:29:40.888
01:29:40.888
01:29:40.888 real 0m3.520s
01:29:40.888 user 0m9.351s
01:29:40.888 sys 0m0.194s
01:29:40.888 05:24:31 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
01:29:40.888 ************************************
01:29:40.888 05:24:31 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
01:29:40.888 END TEST nvme_arbitration
01:29:40.888 ************************************
01:29:40.888 05:24:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
01:29:40.888 05:24:31 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
01:29:40.888 05:24:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:29:40.888 05:24:31 nvme -- common/autotest_common.sh@10 -- # set +x
01:29:40.888 ************************************
01:29:40.888 START TEST nvme_single_aen
01:29:40.888 ************************************
01:29:40.888 05:24:31 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
01:29:40.888 Asynchronous Event Request test
01:29:40.888 Attached to 0000:00:10.0
01:29:40.888 Attached to 0000:00:11.0
01:29:40.888 Attached to 0000:00:13.0
01:29:40.888 Attached to 0000:00:12.0
01:29:40.888 Reset controller to setup AER completions for this process
01:29:40.888 Registering asynchronous event callbacks...
01:29:40.888 Getting orig temperature thresholds of all controllers 01:29:40.888 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:29:40.888 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:29:40.888 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:29:40.888 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:29:40.888 Setting all controllers temperature threshold low to trigger AER 01:29:40.888 Waiting for all controllers temperature threshold to be set lower 01:29:40.888 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:29:40.888 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 01:29:40.888 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:29:40.888 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 01:29:40.888 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:29:40.888 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 01:29:40.888 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:29:40.888 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 01:29:40.888 Waiting for all controllers to trigger AER and reset threshold 01:29:40.888 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 01:29:40.888 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 01:29:40.888 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 01:29:40.888 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 01:29:40.888 Cleaning up... 01:29:40.888 01:29:40.888 real 0m0.338s 01:29:40.888 user 0m0.146s 01:29:40.888 sys 0m0.143s 01:29:40.888 05:24:32 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:40.888 05:24:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 01:29:40.888 ************************************ 01:29:40.888 END TEST nvme_single_aen 01:29:40.888 ************************************ 01:29:40.888 05:24:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 01:29:40.888 05:24:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:40.888 05:24:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:40.888 05:24:32 nvme -- common/autotest_common.sh@10 -- # set +x 01:29:40.888 ************************************ 01:29:40.888 START TEST nvme_doorbell_aers 01:29:40.888 ************************************ 01:29:40.888 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 01:29:40.888 05:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 01:29:40.888 05:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
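The BDF list driving the per-device loop below is generated exactly as logged; a condensed sketch of that enumeration and the loop it feeds (script path, jq filter and the 10-second timeout all taken from this log):

  # Yields 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM
  bdfs=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
  for bdf in $bdfs; do
      timeout --preserve-status 10 \
          /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:$bdf"
  done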
01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 01:29:40.889 05:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 01:29:41.146 [2024-12-09 05:24:32.630651] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:29:51.143 Executing: test_write_invalid_db 01:29:51.143 Waiting for AER completion... 01:29:51.143 Failure: test_write_invalid_db 01:29:51.143 01:29:51.143 Executing: test_invalid_db_write_overflow_sq 01:29:51.143 Waiting for AER completion... 01:29:51.143 Failure: test_invalid_db_write_overflow_sq 01:29:51.143 01:29:51.143 Executing: test_invalid_db_write_overflow_cq 01:29:51.143 Waiting for AER completion... 01:29:51.143 Failure: test_invalid_db_write_overflow_cq 01:29:51.143 01:29:51.143 05:24:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 01:29:51.143 05:24:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 01:29:51.401 [2024-12-09 05:24:42.761899] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:01.366 Executing: test_write_invalid_db 01:30:01.366 Waiting for AER completion... 01:30:01.366 Failure: test_write_invalid_db 01:30:01.366 01:30:01.366 Executing: test_invalid_db_write_overflow_sq 01:30:01.366 Waiting for AER completion... 01:30:01.366 Failure: test_invalid_db_write_overflow_sq 01:30:01.366 01:30:01.366 Executing: test_invalid_db_write_overflow_cq 01:30:01.366 Waiting for AER completion... 01:30:01.366 Failure: test_invalid_db_write_overflow_cq 01:30:01.366 01:30:01.366 05:24:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 01:30:01.366 05:24:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 01:30:01.366 [2024-12-09 05:24:52.883564] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:11.338 Executing: test_write_invalid_db 01:30:11.338 Waiting for AER completion... 01:30:11.338 Failure: test_write_invalid_db 01:30:11.338 01:30:11.338 Executing: test_invalid_db_write_overflow_sq 01:30:11.338 Waiting for AER completion... 01:30:11.338 Failure: test_invalid_db_write_overflow_sq 01:30:11.338 01:30:11.338 Executing: test_invalid_db_write_overflow_cq 01:30:11.338 Waiting for AER completion... 
01:30:11.338 Failure: test_invalid_db_write_overflow_cq 01:30:11.338 01:30:11.338 05:25:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 01:30:11.338 05:25:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 01:30:11.596 [2024-12-09 05:25:03.030608] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.563 Executing: test_write_invalid_db 01:30:21.563 Waiting for AER completion... 01:30:21.563 Failure: test_write_invalid_db 01:30:21.563 01:30:21.563 Executing: test_invalid_db_write_overflow_sq 01:30:21.563 Waiting for AER completion... 01:30:21.563 Failure: test_invalid_db_write_overflow_sq 01:30:21.563 01:30:21.563 Executing: test_invalid_db_write_overflow_cq 01:30:21.563 Waiting for AER completion... 01:30:21.563 Failure: test_invalid_db_write_overflow_cq 01:30:21.563 01:30:21.563 01:30:21.563 real 0m40.593s 01:30:21.563 user 0m34.602s 01:30:21.563 sys 0m5.631s 01:30:21.563 05:25:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:21.563 05:25:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 01:30:21.563 ************************************ 01:30:21.563 END TEST nvme_doorbell_aers 01:30:21.563 ************************************ 01:30:21.563 05:25:12 nvme -- nvme/nvme.sh@97 -- # uname 01:30:21.563 05:25:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 01:30:21.563 05:25:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 01:30:21.563 05:25:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:30:21.563 05:25:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:21.563 05:25:12 nvme -- common/autotest_common.sh@10 -- # set +x 01:30:21.563 ************************************ 01:30:21.563 START TEST nvme_multi_aen 01:30:21.563 ************************************ 01:30:21.563 05:25:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 01:30:21.821 [2024-12-09 05:25:13.192721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.192871] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.192892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.195059] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.195106] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.195125] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.196768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. 
Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.196810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.196827] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.198430] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.198469] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 [2024-12-09 05:25:13.198486] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64709) is not found. Dropping the request. 01:30:21.821 Child process pid: 65231 01:30:22.079 [Child] Asynchronous Event Request test 01:30:22.079 [Child] Attached to 0000:00:10.0 01:30:22.079 [Child] Attached to 0000:00:11.0 01:30:22.079 [Child] Attached to 0000:00:13.0 01:30:22.079 [Child] Attached to 0000:00:12.0 01:30:22.079 [Child] Registering asynchronous event callbacks... 01:30:22.079 [Child] Getting orig temperature thresholds of all controllers 01:30:22.079 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 [Child] Waiting for all controllers to trigger AER and reset threshold 01:30:22.079 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 [Child] Cleaning up... 01:30:22.079 Asynchronous Event Request test 01:30:22.079 Attached to 0000:00:10.0 01:30:22.079 Attached to 0000:00:11.0 01:30:22.079 Attached to 0000:00:13.0 01:30:22.079 Attached to 0000:00:12.0 01:30:22.079 Reset controller to setup AER completions for this process 01:30:22.079 Registering asynchronous event callbacks... 
01:30:22.079 Getting orig temperature thresholds of all controllers 01:30:22.079 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 01:30:22.079 Setting all controllers temperature threshold low to trigger AER 01:30:22.079 Waiting for all controllers temperature threshold to be set lower 01:30:22.079 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 01:30:22.079 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 01:30:22.079 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 01:30:22.079 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 01:30:22.079 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 01:30:22.079 Waiting for all controllers to trigger AER and reset threshold 01:30:22.079 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 01:30:22.079 Cleaning up... 01:30:22.079 01:30:22.079 real 0m0.693s 01:30:22.079 user 0m0.269s 01:30:22.079 sys 0m0.299s 01:30:22.079 05:25:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:22.079 ************************************ 01:30:22.079 END TEST nvme_multi_aen 01:30:22.079 05:25:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 01:30:22.079 ************************************ 01:30:22.079 05:25:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 01:30:22.079 05:25:13 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:30:22.079 05:25:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:22.079 05:25:13 nvme -- common/autotest_common.sh@10 -- # set +x 01:30:22.079 ************************************ 01:30:22.079 START TEST nvme_startup 01:30:22.079 ************************************ 01:30:22.079 05:25:13 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 01:30:22.646 Initializing NVMe Controllers 01:30:22.646 Attached to 0000:00:10.0 01:30:22.646 Attached to 0000:00:11.0 01:30:22.646 Attached to 0000:00:13.0 01:30:22.646 Attached to 0000:00:12.0 01:30:22.646 Initialization complete. 01:30:22.646 Time used:231560.844 (us). 
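The startup test above is a single timed attach and teardown; reading its -t 1000000 argument as a microsecond budget is consistent with the 'Time used' line, but that is an assumption, not something this log states:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000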
01:30:22.646
01:30:22.646 real 0m0.332s
01:30:22.646 user 0m0.114s
01:30:22.646 sys 0m0.174s
01:30:22.646 05:25:13 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
01:30:22.646 05:25:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
01:30:22.646 ************************************
01:30:22.646 END TEST nvme_startup
01:30:22.646 ************************************
01:30:22.646 05:25:14 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
01:30:22.646 05:25:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:30:22.646 05:25:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
01:30:22.646 05:25:14 nvme -- common/autotest_common.sh@10 -- # set +x
01:30:22.646 ************************************
01:30:22.646 START TEST nvme_multi_secondary
01:30:22.646 ************************************
01:30:22.646 05:25:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
01:30:22.646 05:25:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65287
01:30:22.646 05:25:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65288
01:30:22.646 05:25:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
01:30:22.646 05:25:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
01:30:22.646 05:25:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
01:30:25.968 Initializing NVMe Controllers
01:30:25.968 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
01:30:25.968 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
01:30:25.968 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
01:30:25.968 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
01:30:25.968 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
01:30:25.968 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
01:30:25.968 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
01:30:25.968 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
01:30:25.968 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
01:30:25.968 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
01:30:25.968 Initialization complete. Launching workers.
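nvme_multi_secondary drives three spdk_nvme_perf processes at once, one per core mask; the commands logged above amount to the following (copied from this log, backgrounding added so all three overlap). Sharing -i 0 puts them in one shared-memory group, which is what lets the 0x2 and 0x4 instances come up as secondaries against the 0x1 primary:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # primary
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &  # secondary
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &  # secondary
  wait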
01:30:25.968 ========================================================
01:30:25.968 Latency(us)
01:30:25.968 Device Information : IOPS MiB/s Average min max
01:30:25.968 PCIE (0000:00:10.0) NSID 1 from core 1: 5121.30 20.01 3122.29 1601.94 5425.77
01:30:25.968 PCIE (0000:00:11.0) NSID 1 from core 1: 5121.30 20.01 3124.00 1672.65 5715.23
01:30:25.968 PCIE (0000:00:13.0) NSID 1 from core 1: 5121.30 20.01 3123.82 1617.02 5676.90
01:30:25.968 PCIE (0000:00:12.0) NSID 1 from core 1: 5121.30 20.01 3123.73 1614.47 5937.65
01:30:25.968 PCIE (0000:00:12.0) NSID 2 from core 1: 5121.30 20.01 3123.65 1600.49 6118.91
01:30:25.968 PCIE (0000:00:12.0) NSID 3 from core 1: 5121.30 20.01 3124.02 1617.72 5702.45
01:30:25.968 ========================================================
01:30:25.968 Total : 30727.79 120.03 3123.59 1600.49 6118.91
01:30:25.968
01:30:26.227 Initializing NVMe Controllers
01:30:26.227 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
01:30:26.227 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
01:30:26.227 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
01:30:26.227 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
01:30:26.227 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
01:30:26.227 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
01:30:26.227 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
01:30:26.227 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
01:30:26.227 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
01:30:26.227 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
01:30:26.227 Initialization complete. Launching workers.
01:30:26.227 ========================================================
01:30:26.227 Latency(us)
01:30:26.227 Device Information : IOPS MiB/s Average min max
01:30:26.227 PCIE (0000:00:10.0) NSID 1 from core 2: 2537.31 9.91 6304.29 1744.72 13408.64
01:30:26.227 PCIE (0000:00:11.0) NSID 1 from core 2: 2537.31 9.91 6305.17 1830.77 15632.60
01:30:26.227 PCIE (0000:00:13.0) NSID 1 from core 2: 2537.31 9.91 6305.04 1603.02 16177.77
01:30:26.227 PCIE (0000:00:12.0) NSID 1 from core 2: 2537.31 9.91 6304.87 1413.71 12840.76
01:30:26.227 PCIE (0000:00:12.0) NSID 2 from core 2: 2537.31 9.91 6304.67 1261.70 12987.29
01:30:26.227 PCIE (0000:00:12.0) NSID 3 from core 2: 2537.31 9.91 6303.46 1106.72 13326.36
01:30:26.227 ========================================================
01:30:26.227 Total : 15223.84 59.47 6304.58 1106.72 16177.77
01:30:26.227
01:30:26.485 05:25:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65287
01:30:27.857 Initializing NVMe Controllers
01:30:27.857 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
01:30:27.857 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
01:30:27.857 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
01:30:27.857 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
01:30:27.857 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
01:30:27.857 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
01:30:27.857 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
01:30:27.857 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
01:30:27.857 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
01:30:27.857 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
01:30:27.857 Initialization complete. Launching workers.
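The IOPS and MiB/s columns in these tables are self-consistent with the 4096-byte I/O size (MiB/s = IOPS x 4096 / 2^20); a quick check against the core 1 rows above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 5121.30 * 4096 / 1048576 }'   # prints 20.01 MiB/s, matching the table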
01:30:27.857 ========================================================
01:30:27.857 Latency(us)
01:30:27.857 Device Information : IOPS MiB/s Average min max
01:30:27.857 PCIE (0000:00:10.0) NSID 1 from core 0: 8177.80 31.94 1954.88 956.72 5915.17
01:30:27.857 PCIE (0000:00:11.0) NSID 1 from core 0: 8177.80 31.94 1955.98 974.13 5376.13
01:30:27.857 PCIE (0000:00:13.0) NSID 1 from core 0: 8177.80 31.94 1955.90 986.37 5746.70
01:30:27.857 PCIE (0000:00:12.0) NSID 1 from core 0: 8177.80 31.94 1955.83 974.02 5799.94
01:30:27.857 PCIE (0000:00:12.0) NSID 2 from core 0: 8177.60 31.94 1955.80 963.32 5974.53
01:30:27.857 PCIE (0000:00:12.0) NSID 3 from core 0: 8177.80 31.94 1955.68 980.47 6104.86
01:30:27.857 ========================================================
01:30:27.857 Total : 49066.61 191.67 1955.68 956.72 6104.86
01:30:27.857
01:30:28.115 05:25:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65288
01:30:28.115 05:25:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65357
01:30:28.115 05:25:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
01:30:28.115 05:25:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65358
01:30:28.115 05:25:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
01:30:28.115 05:25:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
01:30:31.396 Initializing NVMe Controllers
01:30:31.396 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
01:30:31.396 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
01:30:31.396 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
01:30:31.396 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
01:30:31.396 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
01:30:31.396 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
01:30:31.396 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
01:30:31.396 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
01:30:31.396 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
01:30:31.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
01:30:31.397 Initialization complete. Launching workers.
01:30:31.397 ======================================================== 01:30:31.397 Latency(us) 01:30:31.397 Device Information : IOPS MiB/s Average min max 01:30:31.397 PCIE (0000:00:10.0) NSID 1 from core 1: 5385.05 21.04 2969.33 1036.19 6751.50 01:30:31.397 PCIE (0000:00:11.0) NSID 1 from core 1: 5385.05 21.04 2970.65 1060.77 6257.07 01:30:31.397 PCIE (0000:00:13.0) NSID 1 from core 1: 5385.05 21.04 2970.59 1069.81 6309.47 01:30:31.397 PCIE (0000:00:12.0) NSID 1 from core 1: 5385.05 21.04 2970.43 1072.99 5980.24 01:30:31.397 PCIE (0000:00:12.0) NSID 2 from core 1: 5385.05 21.04 2970.41 1081.55 6310.97 01:30:31.397 PCIE (0000:00:12.0) NSID 3 from core 1: 5385.05 21.04 2970.34 1073.83 6248.81 01:30:31.397 ======================================================== 01:30:31.397 Total : 32310.29 126.21 2970.29 1036.19 6751.50 01:30:31.397 01:30:31.397 Initializing NVMe Controllers 01:30:31.397 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:30:31.397 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:30:31.397 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:30:31.397 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:30:31.397 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:30:31.397 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 01:30:31.397 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 01:30:31.397 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 01:30:31.397 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 01:30:31.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 01:30:31.397 Initialization complete. Launching workers. 01:30:31.397 ======================================================== 01:30:31.397 Latency(us) 01:30:31.397 Device Information : IOPS MiB/s Average min max 01:30:31.397 PCIE (0000:00:10.0) NSID 1 from core 0: 5423.22 21.18 2948.34 1085.14 6285.25 01:30:31.397 PCIE (0000:00:11.0) NSID 1 from core 0: 5423.22 21.18 2949.75 1091.88 6153.94 01:30:31.397 PCIE (0000:00:13.0) NSID 1 from core 0: 5423.22 21.18 2949.63 1137.29 6526.16 01:30:31.397 PCIE (0000:00:12.0) NSID 1 from core 0: 5423.22 21.18 2949.59 1093.82 6386.16 01:30:31.397 PCIE (0000:00:12.0) NSID 2 from core 0: 5423.22 21.18 2949.47 1100.65 6904.68 01:30:31.397 PCIE (0000:00:12.0) NSID 3 from core 0: 5423.22 21.18 2949.36 1092.68 6613.98 01:30:31.397 ======================================================== 01:30:31.397 Total : 32539.29 127.11 2949.36 1085.14 6904.68 01:30:31.397 01:30:34.027 Initializing NVMe Controllers 01:30:34.027 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:30:34.027 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 01:30:34.027 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 01:30:34.027 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 01:30:34.027 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 01:30:34.027 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 01:30:34.027 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 01:30:34.027 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 01:30:34.027 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 01:30:34.027 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 01:30:34.027 Initialization complete. Launching workers. 
01:30:34.027 ======================================================== 01:30:34.027 Latency(us) 01:30:34.027 Device Information : IOPS MiB/s Average min max 01:30:34.027 PCIE (0000:00:10.0) NSID 1 from core 2: 3813.77 14.90 4193.32 1002.01 21990.58 01:30:34.027 PCIE (0000:00:11.0) NSID 1 from core 2: 3813.77 14.90 4194.18 1064.97 22678.96 01:30:34.027 PCIE (0000:00:13.0) NSID 1 from core 2: 3813.77 14.90 4194.44 998.42 22847.50 01:30:34.027 PCIE (0000:00:12.0) NSID 1 from core 2: 3813.77 14.90 4194.56 920.81 22212.42 01:30:34.027 PCIE (0000:00:12.0) NSID 2 from core 2: 3813.77 14.90 4194.06 864.54 22065.97 01:30:34.027 PCIE (0000:00:12.0) NSID 3 from core 2: 3813.77 14.90 4197.41 810.09 21928.54 01:30:34.027 ======================================================== 01:30:34.027 Total : 22882.64 89.39 4194.66 810.09 22847.50 01:30:34.027 01:30:34.027 05:25:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65357 01:30:34.027 05:25:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65358 01:30:34.027 01:30:34.027 real 0m11.370s 01:30:34.027 user 0m19.294s 01:30:34.027 sys 0m1.081s 01:30:34.027 05:25:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:34.027 05:25:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 01:30:34.027 ************************************ 01:30:34.027 END TEST nvme_multi_secondary 01:30:34.027 ************************************ 01:30:34.027 05:25:25 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 01:30:34.027 05:25:25 nvme -- nvme/nvme.sh@102 -- # kill_stub 01:30:34.027 05:25:25 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64273 ]] 01:30:34.027 05:25:25 nvme -- common/autotest_common.sh@1094 -- # kill 64273 01:30:34.027 05:25:25 nvme -- common/autotest_common.sh@1095 -- # wait 64273 01:30:34.027 [2024-12-09 05:25:25.440573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.440682] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.440723] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.440747] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.443533] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.443598] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.443620] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.443654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.446540] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 
01:30:34.027 [2024-12-09 05:25:25.446613] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.446657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.446691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.449657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.449727] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.449750] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.027 [2024-12-09 05:25:25.449771] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65230) is not found. Dropping the request. 01:30:34.286 05:25:25 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 01:30:34.286 05:25:25 nvme -- common/autotest_common.sh@1101 -- # echo 2 01:30:34.286 05:25:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 01:30:34.286 05:25:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:34.286 05:25:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:34.286 05:25:25 nvme -- common/autotest_common.sh@10 -- # set +x 01:30:34.286 ************************************ 01:30:34.286 START TEST bdev_nvme_reset_stuck_adm_cmd 01:30:34.286 ************************************ 01:30:34.286 05:25:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 01:30:34.286 * Looking for test storage... 
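For context, the nvme_multi_secondary test that finishes above drives three spdk_nvme_perf processes against the same controllers at once. A condensed sketch of that pattern, assuming the build paths this job uses (the real script wraps each call in run_test/xtrace machinery):

```bash
# Three perf processes share one shared-memory group ID (-i 0), which is what
# lets secondary processes attach to controllers the primary initialized; each
# is pinned to its own core mask, mirroring nvme.sh@60-64 traced earlier.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4       # foreground instance
wait "$pid0" "$pid1"
```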
01:30:34.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:30:34.286 05:25:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:30:34.286 05:25:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:30:34.286 05:25:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:30:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:34.544 --rc genhtml_branch_coverage=1 01:30:34.544 --rc genhtml_function_coverage=1 01:30:34.544 --rc genhtml_legend=1 01:30:34.544 --rc geninfo_all_blocks=1 01:30:34.544 --rc geninfo_unexecuted_blocks=1 01:30:34.544 01:30:34.544 ' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:30:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:34.544 --rc genhtml_branch_coverage=1 01:30:34.544 --rc genhtml_function_coverage=1 01:30:34.544 --rc genhtml_legend=1 01:30:34.544 --rc geninfo_all_blocks=1 01:30:34.544 --rc geninfo_unexecuted_blocks=1 01:30:34.544 01:30:34.544 ' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:30:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:34.544 --rc genhtml_branch_coverage=1 01:30:34.544 --rc genhtml_function_coverage=1 01:30:34.544 --rc genhtml_legend=1 01:30:34.544 --rc geninfo_all_blocks=1 01:30:34.544 --rc geninfo_unexecuted_blocks=1 01:30:34.544 01:30:34.544 ' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:30:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:34.544 --rc genhtml_branch_coverage=1 01:30:34.544 --rc genhtml_function_coverage=1 01:30:34.544 --rc genhtml_legend=1 01:30:34.544 --rc geninfo_all_blocks=1 01:30:34.544 --rc geninfo_unexecuted_blocks=1 01:30:34.544 01:30:34.544 ' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 01:30:34.544 
05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65531 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65531 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65531 ']' 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:34.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
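The BDF selection traced above is worth isolating: gen_nvme.sh emits an SPDK JSON config with one attach entry per controller, and the helper simply takes the first traddr. A standalone condensation of the same logic:

```bash
# Mirrors get_first_nvme_bdf from the trace: list every controller PCI
# address from the generated config, fail early if none, return the first.
rootdir=/home/vagrant/spdk_repo/spdk

get_first_nvme_bdf() {
    local -a bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1
    printf '%s\n' "${bdfs[0]}"        # 0000:00:10.0 on this VM
}
```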
01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:34.544 05:25:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:30:34.802 [2024-12-09 05:25:26.270255] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:30:34.802 [2024-12-09 05:25:26.270758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65531 ] 01:30:35.061 [2024-12-09 05:25:26.487872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:30:35.061 [2024-12-09 05:25:26.659884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:30:35.061 [2024-12-09 05:25:26.659998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:30:35.061 [2024-12-09 05:25:26.660130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:35.061 [2024-12-09 05:25:26.660142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:30:35.996 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:35.996 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 01:30:35.996 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 01:30:35.996 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:35.996 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:30:36.256 nvme0n1 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_7KFkf.txt 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:30:36.256 true 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733721927 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65559 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 01:30:36.256 05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 01:30:36.256 
05:25:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:30:38.160 [2024-12-09 05:25:29.669754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 01:30:38.160 [2024-12-09 05:25:29.670364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:30:38.160 [2024-12-09 05:25:29.670414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 01:30:38.160 [2024-12-09 05:25:29.670436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:38.160 [2024-12-09 05:25:29.672991] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 01:30:38.160 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65559 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65559 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65559 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:38.160 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_7KFkf.txt 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 01:30:38.161 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_7KFkf.txt 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65531 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65531 ']' 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65531 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65531 01:30:38.419 killing process with pid 65531 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65531' 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65531 01:30:38.419 05:25:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65531 01:30:40.946 05:25:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 01:30:40.946 05:25:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 01:30:40.946 ************************************ 01:30:40.946 END TEST bdev_nvme_reset_stuck_adm_cmd 01:30:40.946 ************************************ 01:30:40.946 01:30:40.946 real 0m6.473s 
01:30:40.946 user 0m22.078s 01:30:40.946 sys 0m0.857s 01:30:40.946 05:25:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:40.946 05:25:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 01:30:40.946 05:25:32 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 01:30:40.946 05:25:32 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 01:30:40.946 05:25:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:40.946 05:25:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:40.946 05:25:32 nvme -- common/autotest_common.sh@10 -- # set +x 01:30:40.946 ************************************ 01:30:40.946 START TEST nvme_fio 01:30:40.946 ************************************ 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:30:40.946 05:25:32 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 01:30:40.946 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:30:41.203 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 01:30:41.203 05:25:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:30:41.768 05:25:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:30:41.768 05:25:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:41.768 05:25:33 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:30:41.768 05:25:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 01:30:41.768 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:30:41.768 fio-3.35 01:30:41.768 Starting 1 thread 01:30:45.059 01:30:45.059 test: (groupid=0, jobs=1): err= 0: pid=65710: Mon Dec 9 05:25:36 2024 01:30:45.059 read: IOPS=15.5k, BW=60.6MiB/s (63.5MB/s)(121MiB/2001msec) 01:30:45.059 slat (nsec): min=4116, max=73511, avg=6453.93, stdev=3489.66 01:30:45.059 clat (usec): min=244, max=8558, avg=4098.12, stdev=570.49 01:30:45.059 lat (usec): min=250, max=8606, avg=4104.58, stdev=571.20 01:30:45.059 clat percentiles (usec): 01:30:45.059 | 1.00th=[ 3294], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3720], 01:30:45.059 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4146], 01:30:45.059 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4883], 01:30:45.059 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7439], 01:30:45.059 | 99.99th=[ 8455] 01:30:45.059 bw ( KiB/s): min=58416, max=64928, per=100.00%, avg=62402.67, stdev=3493.30, samples=3 01:30:45.059 iops : min=14604, max=16232, avg=15600.67, stdev=873.33, samples=3 01:30:45.059 write: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(121MiB/2001msec); 0 zone resets 01:30:45.059 slat (usec): min=4, max=704, avg= 6.88, stdev= 5.46 01:30:45.059 clat (usec): min=234, max=8465, avg=4120.40, stdev=577.59 01:30:45.059 lat (usec): min=239, max=8476, avg=4127.28, stdev=578.29 01:30:45.059 clat percentiles (usec): 01:30:45.059 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3720], 01:30:45.059 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4146], 01:30:45.059 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4883], 01:30:45.059 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7439], 01:30:45.059 | 99.99th=[ 8225] 01:30:45.059 bw ( KiB/s): min=58720, max=64432, per=99.81%, avg=61960.00, stdev=2932.42, samples=3 01:30:45.059 iops : min=14680, max=16108, avg=15490.00, stdev=733.11, samples=3 01:30:45.059 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:30:45.059 lat (msec) : 2=0.06%, 4=46.50%, 10=53.40% 01:30:45.059 cpu : usr=98.90%, sys=0.05%, ctx=7, majf=0, minf=606 
01:30:45.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:30:45.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:45.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:30:45.059 issued rwts: total=31044,31056,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:45.059 latency : target=0, window=0, percentile=100.00%, depth=128 01:30:45.059 01:30:45.059 Run status group 0 (all jobs): 01:30:45.059 READ: bw=60.6MiB/s (63.5MB/s), 60.6MiB/s-60.6MiB/s (63.5MB/s-63.5MB/s), io=121MiB (127MB), run=2001-2001msec 01:30:45.059 WRITE: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 01:30:45.316 ----------------------------------------------------- 01:30:45.316 Suppressions used: 01:30:45.316 count bytes template 01:30:45.316 1 32 /usr/src/fio/parse.c 01:30:45.316 1 8 libtcmalloc_minimal.so 01:30:45.316 ----------------------------------------------------- 01:30:45.316 01:30:45.316 05:25:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:30:45.316 05:25:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:30:45.316 05:25:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:30:45.316 05:25:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 01:30:45.574 05:25:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 01:30:45.574 05:25:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:30:46.150 05:25:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:30:46.150 05:25:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:30:46.150 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:30:46.151 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:30:46.151 05:25:37 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:30:46.151 05:25:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 01:30:46.151 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:30:46.151 fio-3.35 01:30:46.151 Starting 1 thread 01:30:49.435 01:30:49.435 test: (groupid=0, jobs=1): err= 0: pid=65776: Mon Dec 9 05:25:40 2024 01:30:49.435 read: IOPS=16.1k, BW=62.7MiB/s (65.8MB/s)(125MiB/2001msec) 01:30:49.435 slat (nsec): min=4223, max=74818, avg=6389.86, stdev=3063.98 01:30:49.435 clat (usec): min=362, max=10517, avg=3957.46, stdev=711.18 01:30:49.435 lat (usec): min=367, max=10571, avg=3963.85, stdev=712.21 01:30:49.435 clat percentiles (usec): 01:30:49.435 | 1.00th=[ 3163], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3523], 01:30:49.435 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 01:30:49.435 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5800], 01:30:49.435 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 8979], 01:30:49.435 | 99.99th=[10421] 01:30:49.435 bw ( KiB/s): min=57056, max=66984, per=98.49%, avg=63245.33, stdev=5398.67, samples=3 01:30:49.435 iops : min=14264, max=16746, avg=15811.33, stdev=1349.67, samples=3 01:30:49.435 write: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(126MiB/2001msec); 0 zone resets 01:30:49.435 slat (nsec): min=4277, max=57908, avg=6669.27, stdev=3209.45 01:30:49.435 clat (usec): min=240, max=10418, avg=3976.89, stdev=714.83 01:30:49.435 lat (usec): min=246, max=10430, avg=3983.56, stdev=715.87 01:30:49.435 clat percentiles (usec): 01:30:49.435 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 01:30:49.435 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3884], 01:30:49.435 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 5800], 01:30:49.435 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7504], 99.95th=[ 9110], 01:30:49.435 | 99.99th=[10159] 01:30:49.435 bw ( KiB/s): min=57400, max=66416, per=97.85%, avg=62946.67, stdev=4853.72, samples=3 01:30:49.435 iops : min=14350, max=16604, avg=15736.67, stdev=1213.43, samples=3 01:30:49.435 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 01:30:49.435 lat (msec) : 2=0.10%, 4=69.74%, 10=30.09%, 20=0.02% 01:30:49.435 cpu : usr=98.90%, sys=0.10%, ctx=5, majf=0, minf=606 01:30:49.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:30:49.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:49.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:30:49.435 issued rwts: total=32125,32179,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:49.435 latency : target=0, window=0, percentile=100.00%, depth=128 01:30:49.435 01:30:49.435 Run status group 0 (all jobs): 01:30:49.435 READ: bw=62.7MiB/s (65.8MB/s), 62.7MiB/s-62.7MiB/s (65.8MB/s-65.8MB/s), io=125MiB (132MB), run=2001-2001msec 01:30:49.435 WRITE: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 01:30:49.694 ----------------------------------------------------- 01:30:49.694 Suppressions used: 01:30:49.694 count bytes template 01:30:49.694 1 32 /usr/src/fio/parse.c 01:30:49.694 1 8 libtcmalloc_minimal.so 01:30:49.694 ----------------------------------------------------- 01:30:49.694 
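Every per-controller fio pass in this section uses the same invocation shape, shown below in condensed form with this job's paths. Two details are visible in the trace: the ASAN runtime is preloaded ahead of the plugin because this is an ASAN build, and the colons in the PCI address are replaced with dots inside --filename, since fio reserves ':' as a filename separator:

```bash
spdk_dir=/home/vagrant/spdk_repo/spdk

# External-ioengine run against one controller; example_config.fio ships
# with SPDK and the spdk_nvme plugin is built under build/fio/.
LD_PRELOAD="/usr/lib64/libasan.so.8 $spdk_dir/build/fio/spdk_nvme" \
    /usr/src/fio/fio "$spdk_dir/app/fio/nvme/example_config.fio" \
    --filename="trtype=PCIe traddr=0000.00.11.0" \
    --bs=4096
```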
01:30:49.694 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:30:49.694 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:30:49.694 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 01:30:49.694 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:30:50.260 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:30:50.260 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 01:30:50.518 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:30:50.518 05:25:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:30:50.518 05:25:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 01:30:50.777 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:30:50.777 fio-3.35 01:30:50.777 Starting 1 thread 01:30:54.072 01:30:54.072 test: (groupid=0, jobs=1): err= 0: pid=65842: Mon Dec 9 05:25:45 2024 01:30:54.072 read: IOPS=17.0k, BW=66.2MiB/s (69.4MB/s)(133MiB/2001msec) 01:30:54.072 slat (nsec): min=4338, max=66434, avg=6117.38, stdev=2990.29 01:30:54.072 clat (usec): min=243, max=10883, avg=3749.17, stdev=588.97 01:30:54.072 lat (usec): min=248, max=10928, avg=3755.28, stdev=590.06 01:30:54.072 clat percentiles (usec): 01:30:54.072 | 1.00th=[ 3163], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3425], 01:30:54.072 | 
30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 01:30:54.072 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4146], 95.00th=[ 4424], 01:30:54.072 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 6915], 99.95th=[ 8848], 01:30:54.072 | 99.99th=[10683] 01:30:54.072 bw ( KiB/s): min=62728, max=71256, per=98.95%, avg=67104.00, stdev=4268.41, samples=3 01:30:54.072 iops : min=15682, max=17814, avg=16776.00, stdev=1067.10, samples=3 01:30:54.072 write: IOPS=17.0k, BW=66.4MiB/s (69.6MB/s)(133MiB/2001msec); 0 zone resets 01:30:54.072 slat (usec): min=4, max=117, avg= 6.47, stdev= 3.21 01:30:54.072 clat (usec): min=346, max=10730, avg=3764.46, stdev=598.23 01:30:54.072 lat (usec): min=352, max=10739, avg=3770.92, stdev=599.28 01:30:54.072 clat percentiles (usec): 01:30:54.072 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3458], 01:30:54.072 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 01:30:54.072 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4146], 95.00th=[ 4490], 01:30:54.072 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 8848], 01:30:54.072 | 99.99th=[10421] 01:30:54.072 bw ( KiB/s): min=63080, max=70920, per=98.61%, avg=67050.67, stdev=3920.98, samples=3 01:30:54.072 iops : min=15770, max=17730, avg=16762.67, stdev=980.25, samples=3 01:30:54.072 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:30:54.072 lat (msec) : 2=0.05%, 4=85.05%, 10=14.84%, 20=0.02% 01:30:54.072 cpu : usr=98.85%, sys=0.20%, ctx=3, majf=0, minf=607 01:30:54.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:30:54.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:54.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:30:54.073 issued rwts: total=33924,34013,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:54.073 latency : target=0, window=0, percentile=100.00%, depth=128 01:30:54.073 01:30:54.073 Run status group 0 (all jobs): 01:30:54.073 READ: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=133MiB (139MB), run=2001-2001msec 01:30:54.073 WRITE: bw=66.4MiB/s (69.6MB/s), 66.4MiB/s-66.4MiB/s (69.6MB/s-69.6MB/s), io=133MiB (139MB), run=2001-2001msec 01:30:54.330 ----------------------------------------------------- 01:30:54.330 Suppressions used: 01:30:54.330 count bytes template 01:30:54.330 1 32 /usr/src/fio/parse.c 01:30:54.330 1 8 libtcmalloc_minimal.so 01:30:54.330 ----------------------------------------------------- 01:30:54.330 01:30:54.330 05:25:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:30:54.330 05:25:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 01:30:54.330 05:25:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 01:30:54.330 05:25:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 01:30:54.895 05:25:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 01:30:54.895 05:25:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 01:30:55.154 05:25:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 01:30:55.154 05:25:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:30:55.154 05:25:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 01:30:55.413 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:30:55.413 fio-3.35 01:30:55.413 Starting 1 thread 01:30:59.607 01:30:59.607 test: (groupid=0, jobs=1): err= 0: pid=65908: Mon Dec 9 05:25:51 2024 01:30:59.607 read: IOPS=17.6k, BW=68.8MiB/s (72.2MB/s)(138MiB/2001msec) 01:30:59.607 slat (nsec): min=4252, max=62040, avg=5888.52, stdev=2164.70 01:30:59.607 clat (usec): min=241, max=9785, avg=3609.91, stdev=472.84 01:30:59.607 lat (usec): min=247, max=9847, avg=3615.79, stdev=473.50 01:30:59.607 clat percentiles (usec): 01:30:59.607 | 1.00th=[ 3032], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3294], 01:30:59.607 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3556], 01:30:59.607 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4293], 95.00th=[ 4490], 01:30:59.607 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 7177], 99.95th=[ 8160], 01:30:59.607 | 99.99th=[ 9634] 01:30:59.607 bw ( KiB/s): min=63625, max=72368, per=98.54%, avg=69453.67, stdev=5047.77, samples=3 01:30:59.607 iops : min=15906, max=18092, avg=17363.33, stdev=1262.09, samples=3 01:30:59.607 write: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2001msec); 0 zone resets 01:30:59.607 slat (nsec): min=4483, max=80150, avg=6190.96, stdev=2405.90 01:30:59.608 clat (usec): min=293, max=9711, avg=3623.83, stdev=470.09 01:30:59.608 lat (usec): min=298, max=9723, avg=3630.02, stdev=470.76 01:30:59.608 clat percentiles (usec): 01:30:59.608 | 1.00th=[ 3064], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3326], 01:30:59.608 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3556], 01:30:59.608 | 70.00th=[ 3654], 80.00th=[ 
3851], 90.00th=[ 4293], 95.00th=[ 4490], 01:30:59.608 | 99.00th=[ 4948], 99.50th=[ 5866], 99.90th=[ 7177], 99.95th=[ 8356], 01:30:59.608 | 99.99th=[ 9372] 01:30:59.608 bw ( KiB/s): min=63912, max=72312, per=98.47%, avg=69440.00, stdev=4788.61, samples=3 01:30:59.608 iops : min=15978, max=18078, avg=17360.00, stdev=1197.15, samples=3 01:30:59.608 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:30:59.608 lat (msec) : 2=0.06%, 4=82.97%, 10=16.92% 01:30:59.608 cpu : usr=99.05%, sys=0.10%, ctx=4, majf=0, minf=604 01:30:59.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:30:59.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:59.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:30:59.608 issued rwts: total=35260,35276,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:59.608 latency : target=0, window=0, percentile=100.00%, depth=128 01:30:59.608 01:30:59.608 Run status group 0 (all jobs): 01:30:59.608 READ: bw=68.8MiB/s (72.2MB/s), 68.8MiB/s-68.8MiB/s (72.2MB/s-72.2MB/s), io=138MiB (144MB), run=2001-2001msec 01:30:59.608 WRITE: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (144MB), run=2001-2001msec 01:30:59.867 ----------------------------------------------------- 01:30:59.867 Suppressions used: 01:30:59.867 count bytes template 01:30:59.867 1 32 /usr/src/fio/parse.c 01:30:59.867 1 8 libtcmalloc_minimal.so 01:30:59.867 ----------------------------------------------------- 01:30:59.867 01:31:00.126 05:25:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 01:31:00.126 05:25:51 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 01:31:00.126 01:31:00.126 real 0m19.157s 01:31:00.126 user 0m15.672s 01:31:00.126 sys 0m1.939s 01:31:00.126 05:25:51 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:00.126 05:25:51 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 01:31:00.126 ************************************ 01:31:00.126 END TEST nvme_fio 01:31:00.126 ************************************ 01:31:00.126 ************************************ 01:31:00.126 END TEST nvme 01:31:00.126 ************************************ 01:31:00.126 01:31:00.126 real 1m36.194s 01:31:00.126 user 3m55.012s 01:31:00.126 sys 0m15.495s 01:31:00.126 05:25:51 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:00.126 05:25:51 nvme -- common/autotest_common.sh@10 -- # set +x 01:31:00.126 05:25:51 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 01:31:00.126 05:25:51 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 01:31:00.126 05:25:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:31:00.126 05:25:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:00.126 05:25:51 -- common/autotest_common.sh@10 -- # set +x 01:31:00.126 ************************************ 01:31:00.126 START TEST nvme_scc 01:31:00.126 ************************************ 01:31:00.126 05:25:51 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 01:31:00.126 * Looking for test storage... 
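Looking back at the bdev_nvme_reset_stuck_adm_cmd check earlier, the base64_decode_bits calls reduce to a short worked decode. This sketch uses the exact blob from the trace; the bit positions follow the NVMe completion status layout (phase in bit 0, SC in bits 1-8, SCT in bits 9-11):

```bash
# The base64 blob decodes to a 16-byte completion in which only byte 14 is
# set (0x02); bytes 14-15 hold the little-endian 16-bit status field.
b64='AAAAAAAAAAAAAAAAAAACAA=='
mapfile -t bytes < <(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"')
status=$(( bytes[14] | (bytes[15] << 8) ))
sc=$((  (status >> 1) & 0xff ))   # Status Code      -> 0x1 (Invalid Opcode)
sct=$(( (status >> 9) & 0x7 ))    # Status Code Type -> 0x0 (Generic)
printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"
```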
01:31:00.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:31:00.126 05:25:51 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:00.126 05:25:51 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 01:31:00.126 05:25:51 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:00.388 05:25:51 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@345 -- # : 1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@365 -- # decimal 1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@353 -- # local d=1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@355 -- # echo 1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@366 -- # decimal 2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@353 -- # local d=2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@355 -- # echo 2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@368 -- # return 0 01:31:00.388 05:25:51 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:00.388 05:25:51 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:00.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:00.388 --rc genhtml_branch_coverage=1 01:31:00.388 --rc genhtml_function_coverage=1 01:31:00.388 --rc genhtml_legend=1 01:31:00.388 --rc geninfo_all_blocks=1 01:31:00.388 --rc geninfo_unexecuted_blocks=1 01:31:00.388 01:31:00.388 ' 01:31:00.388 05:25:51 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:00.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:00.388 --rc genhtml_branch_coverage=1 01:31:00.388 --rc genhtml_function_coverage=1 01:31:00.388 --rc genhtml_legend=1 01:31:00.388 --rc geninfo_all_blocks=1 01:31:00.388 --rc geninfo_unexecuted_blocks=1 01:31:00.388 01:31:00.388 ' 01:31:00.388 05:25:51 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:31:00.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:00.388 --rc genhtml_branch_coverage=1 01:31:00.388 --rc genhtml_function_coverage=1 01:31:00.388 --rc genhtml_legend=1 01:31:00.388 --rc geninfo_all_blocks=1 01:31:00.388 --rc geninfo_unexecuted_blocks=1 01:31:00.388 01:31:00.388 ' 01:31:00.388 05:25:51 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:00.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:00.388 --rc genhtml_branch_coverage=1 01:31:00.388 --rc genhtml_function_coverage=1 01:31:00.388 --rc genhtml_legend=1 01:31:00.388 --rc geninfo_all_blocks=1 01:31:00.388 --rc geninfo_unexecuted_blocks=1 01:31:00.388 01:31:00.388 ' 01:31:00.388 05:25:51 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:31:00.388 05:25:51 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:31:00.388 05:25:51 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:00.388 05:25:51 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:00.388 05:25:51 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:00.388 05:25:51 nvme_scc -- paths/export.sh@5 -- # export PATH 01:31:00.388 05:25:51 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
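[Editor's note] The xtrace above shows scripts/common.sh deciding whether the installed lcov is older than 2: it splits each version string on ".", "-" and ":" into an array, then compares the numeric fields one position at a time. A condensed sketch of that comparison pattern, assuming only what the trace shows (the helper below is illustrative, not the harness's exact code):

    # Return success if version $1 sorts before version $2,
    # comparing dotted components numerically as the trace does.
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1
    }
    lt 1.15 2 && echo "1.15 < 2"   # the branch taken above, enabling the lcov 1.x options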
01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 01:31:00.388 05:25:51 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 01:31:00.389 05:25:51 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 01:31:00.389 05:25:51 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 01:31:00.389 05:25:51 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 01:31:00.389 05:25:51 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 01:31:00.389 05:25:51 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:00.389 05:25:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 01:31:00.389 05:25:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 01:31:00.389 05:25:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 01:31:00.389 05:25:51 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:31:00.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:00.918 Waiting for block devices as requested 01:31:00.918 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:31:00.918 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:31:00.918 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:31:01.176 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:31:06.453 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:31:06.453 05:25:57 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 01:31:06.453 05:25:57 nvme_scc -- scripts/common.sh@18 -- # local i 01:31:06.453 05:25:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:31:06.453 05:25:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:06.453 05:25:57 nvme_scc -- scripts/common.sh@27 -- # return 0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
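[Editor's note] The long run of eval lines that begins here is functions.sh's scan_nvme_ctrls filling one bash associative array per controller: nvme_get pipes `nvme id-ctrl /dev/nvme0` through a `while IFS=: read -r reg val` loop and stores each non-empty field as nvme0[reg]=val. A stripped-down sketch of the pattern, assuming nvme-cli and a /dev/nvme0 device (the trace builds the same assignments via eval; a direct assignment is shown here for simplicity):

    declare -A nvme0
    # Parse "reg : val" lines from nvme-cli into the array.
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue
      reg=${reg//[[:space:]]/}    # strip the padding around the register name
      nvme0[$reg]=${val# }        # keep the value, minus one leading space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "${nvme0[mdts]}"         # e.g. 7 for this QEMU controller

The harness later consults these fields (oncs, oacs, and the per-namespace entries) to decide which sub-tests a controller can run.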
01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.453 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
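[Editor's note] A few of the fields just captured are worth decoding. mdts=7 is a power-of-two count of minimum-sized memory pages, so with the usual 4 KiB CAP.MPSMIN (an assumption; the trace does not print the CAP register) the controller's maximum transfer size works out as below; oacs=0x12a is likewise a bitmask of the optional admin commands this QEMU controller supports.

    # Max Data Transfer Size: 2^MDTS pages of MPSMIN bytes each.
    mdts=7
    mpsmin_bytes=4096   # assumed 4 KiB minimum page size
    echo "$(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512 KiB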
01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.454 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 01:31:06.455 05:25:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:06.455 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.456 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:06.457 05:25:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 01:31:06.457 
05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 01:31:06.457 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
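[Editor's note] For the ng0n1 namespace, flbas=0x4 selects LBA format 4; in the lbaf table printed just below, that entry is "ms:0 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with no metadata, and nsze=0x140000 such blocks gives the namespace size. A quick check of that arithmetic:

    # Namespace size from the fields above: nsze blocks of 2^lbads bytes.
    nsze=$(( 0x140000 ))   # 1310720 blocks
    lbads=12               # from lbaf4 "lbads:12 rp:0 (in use)"
    echo "$(( nsze * (1 << lbads) / 1024 / 1024 / 1024 )) GiB"   # 5 GiB

The mssrl/mcl/msrc fields that follow bound the Simple Copy ranges a command may specify, which is what the nvme_scc test being set up here exercises.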
01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 01:31:06.458 05:25:57 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.458 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.458 05:25:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 01:31:06.459 05:25:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.459 05:25:57 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 01:31:06.459 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 01:31:06.460 05:25:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.460 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 01:31:06.461 05:25:57 nvme_scc -- scripts/common.sh@18 -- # local i 01:31:06.461 05:25:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:31:06.461 05:25:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:06.461 05:25:57 nvme_scc -- scripts/common.sh@27 -- # return 0 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 01:31:06.461 05:25:57 
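
The id-ctrl trace that follows for nvme1 repeats one parsing pattern hundreds of times, so it may help to see it once in isolation. Below is a minimal sketch of what the functions.sh@16-23 entries above are doing, assuming nvme-cli's plain-text "field : value" output format; the helper name nvme_get_sketch and the trailing echo are illustrative only, not the suite's actual implementation:

    # Sketch: load `nvme id-ctrl` output into a global associative array,
    # mirroring the nvme_get trace (functions.sh@16-23) in this log.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # declares a global array, e.g. nvme1=()
        while IFS=: read -r reg val; do      # split each output line on the first ':'
            reg=${reg//[[:space:]]/}         # drop the padding around the field name
            [[ -n $val ]] || continue        # skip headers and empty fields
            eval "${ref}[${reg}]=\${val# }"  # e.g. nvme1[vid]='0x1b36'
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }

    nvme_get_sketch nvme1 /dev/nvme1
    echo "mdts=${nvme1[mdts]} subnqn=${nvme1[subnqn]}"

Note that `read -r reg val` keeps any later colons inside the value, which is why subnqn=nqn.2019-08.org.qemu:12340 survives intact in the trace. Once populated, these arrays (nvme0, ng0n1, nvme0n1 above; nvme1 and ng1n1 below) are what functions.sh@60-63 registers into ctrls, nvmes, bdfs and ordered_ctrls for the rest of the run.
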
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.461 
05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 01:31:06.461 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 01:31:06.462 
05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.462 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.463 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.464 05:25:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 01:31:06.464 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:06.465 05:25:57 
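
Between the per-controller nvme_get calls, the functions.sh@54 loop visible above enumerates each controller's namespace nodes with a single extended glob. A standalone sketch of that expansion, assuming extglob is enabled and the /sys/class/nvme layout this trace walks (nullglob and the echo are added here only for illustration):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # "ng${ctrl##*nvme}" -> ng1     (character-device namespaces: ng1n1, ...)
    # "${ctrl##*/}n"     -> nvme1n  (block-device namespaces: nvme1n1, ...)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                          # ng1n1, then nvme1n1
        echo "namespace ${ns_dev##*n}: $ns_dev"   # both reduce to index 1
    done

Because ${ns##*n} strips everything through the last 'n', both node names map to the same index, so _ctrl_ns[1] is first set to the ng device and then overwritten with the block device, exactly as the nvme0 pass above ended with _ctrl_ns[1]=nvme0n1 after first recording ng0n1.
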
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.465 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.466 05:25:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 01:31:06.466 05:25:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.466 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 
05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
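[Editor's sketch] The trace above is the nvme_get helper at work: functions.sh@16 pipes /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 into a loop (@21-@23) that splits each output line on ':' and evals the reg/val pair into a globally declared associative array. A minimal reconstruction of that loop from the xtrace follows; the whitespace trimming is an assumption, and the real helper in SPDK's test/nvme/functions.sh may differ in detail.

    # Minimal sketch of nvme_get as traced at functions.sh@16-23.
    # Assumes `nvme` (nvme-cli) is on PATH; the log runs it from
    # /usr/local/src/nvme-cli/nvme.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # @20: e.g. declare -gA nvme1n1=()

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # @22: skip lines with no "reg : val"
            reg=${reg//[[:space:]]/}         # "nsze   " -> "nsze" (assumption)
            val=${val# }                     # drop the pad after ':' (assumption)
            eval "${ref}[$reg]=\"$val\""     # @23: nvme1n1[nsze]="0x17a17a"
        done < <(nvme "$@")                  # @16: nvme id-ns /dev/nvme1n1
    }

Called as nvme_get nvme1n1 id-ns /dev/nvme1n1, exactly as at @57 above, it leaves every identify field addressable as ${nvme1n1[...]}.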
01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.467 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 01:31:06.468 
05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.468 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 01:31:06.469 05:25:58 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 01:31:06.469 05:25:58 nvme_scc -- scripts/common.sh@18 -- # local i 01:31:06.469 05:25:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 01:31:06.469 05:25:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:06.469 05:25:58 nvme_scc -- scripts/common.sh@27 -- # return 0 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 01:31:06.469 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
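[Editor's sketch] Zooming out, the @47-@63 entries just above (pci_can_use, ctrl_dev=nvme2, and the ctrls/nvmes/bdfs/ordered_ctrls assignments for nvme1) are the controller walk that drives all of this parsing. A hedged reconstruction under the same names the trace uses; the function name, the pci derivation, and the permissive pci_can_use stub are assumptions, not copied from SPDK (the traced pci_can_use in scripts/common.sh@18-@27 simply returns 0 here because no allow/block list is set).

    shopt -s extglob                           # the @( | ) glob below needs it
    pci_can_use() { return 0; }                # stub for the common.sh check (@18-@27)

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {                        # illustrative name
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do  # @47
            [[ -e $ctrl ]] || continue                       # @48
            pci=$(< "$ctrl/address")                         # @49 (assumption)
            pci_can_use "$pci" || continue                   # @50
            ctrl_dev=${ctrl##*/}                             # @51: nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52

            declare -gA "${ctrl_dev}_ns=()"
            local -n _ctrl_ns=${ctrl_dev}_ns                 # @53

            # @54: match generic (ng2n1) and block (nvme2n1) nodes alike
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue                     # @55
                ns_dev=${ns##*/}                             # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57
                _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: keyed by nsid
            done

            ctrls[$ctrl_dev]=$ctrl_dev                       # @60
            nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # @61
            bdfs[$ctrl_dev]=$pci                             # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
        done
    }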
01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.470 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
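[Editor's sketch] Everything the loop stores is immediately queryable. For instance, nvme1n1 above carries flbas=0x7 and lbaf7='ms:64 lbads:12 rp:0 (in use)': the low nibble of FLBAS selects the in-use LBA format, and 2^lbads is the logical block size. A hedged decode of that, using only the arrays built above (the helper name and string trimming are mine, not functions.sh's; assumes fewer than 16 LBA formats so FLBAS bits 3:0 are the index):

    # Derive the in-use logical block size from an id-ns array built above.
    lba_size() {
        local -n _ns=$1
        local fmt=$(( _ns[flbas] & 0xf ))      # nvme1n1: 0x7 & 0xf -> 7
        local desc=${_ns[lbaf$fmt]}            # 'ms:64 lbads:12 rp:0 (in use)'
        local lbads=${desc#*lbads:}
        lbads=${lbads%% *}                     # -> 12
        echo $(( 1 << lbads ))                 # -> 4096
    }

    lba_size nvme1n1                           # prints 4096

The "(in use)" marker the parser kept on lbaf7 is consistent with this: format 7 is the one the namespace is currently formatted with.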
01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 01:31:06.741 05:25:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.741 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
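[Editor's sketch] This identify data is what the surrounding nvme_scc test ultimately gates on: a few entries below, ONCS lands in the array as nvme2[oncs]=0x15d, and bit 8 of ONCS is the NVMe Copy command that the SCC (simple copy) test exercises. A hedged sketch of such a gate; the helper name is illustrative, not necessarily what functions.sh defines.

    # Does this controller advertise Copy (ONCS bit 8)?
    # nvme2[oncs]=0x15d below has the bit set: 0x15d & 0x100 == 0x100.
    supports_scc() {
        local -n _ctrl=$1
        (( _ctrl[oncs] & (1 << 8) ))
    }

    supports_scc nvme2 && echo "nvme2 advertises the Copy command"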
01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 01:31:06.742 05:25:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.742 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 01:31:06.743 
05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.743 
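The loop traced above is the heart of this enumeration: functions.sh@16-23 pipe `nvme id-ctrl`/`id-ns` text output through `IFS=: read -r reg val` and `eval` each `field : value` pair into a global associative array named after the device, while @54-58 walk both the generic character nodes (`ng2n1`...) and block nodes (`nvme2n1`...) under /sys/class/nvme/nvme2 and file each into `_ctrl_ns` keyed by the namespace id (`${ns##*n}`). A condensed, runnable sketch of that pattern follows; the helper name and the canned `fake_id_ns` output are illustrative assumptions, not the verbatim functions.sh source:

    nvme_get_sketch() {                        # modeled on the nvme_get calls traced above
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declare -gA ng2n1=() at global scope
        while IFS=: read -r reg val; do        # split at the FIRST colon only
            reg=${reg//[[:space:]]/}           # strip the padding around the key
            val=${val# }                       # strip the space after ':'
            [[ -n $val ]] && eval "${ref}[\$reg]=\"\$val\""   # ng2n1[nsze]=0x100000, ...
        done < <("$@")                         # run the id-ctrl/id-ns command given
    }

    # Stand-in for /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 (hypothetical sample):
    fake_id_ns() { printf 'nsze    : 0x100000\nflbas   : 0x4\n'; }
    nvme_get_sketch ng2n1 fake_id_ns
    echo "${ng2n1[nsze]} ${ng2n1[flbas]}"      # -> 0x100000 0x4

Because `read` assigns everything after the first colon to `val`, a line such as `ps0 : mp:25.00W operational enlat:16 ...` lands intact in the value, which is exactly why the `nvme2[ps0]=...` assignment above still contains its inner `key:val` pairs.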
05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 01:31:06.743 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.744 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 01:31:06.745 05:25:58 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 
05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.745 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.746 05:25:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 01:31:06.746 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:06.747 05:25:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.747 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@18 -- # shift 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.748 05:25:58 nvme_scc -- 
01:31:06.748 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:31:06.749 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
01:31:06.749 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:31:06.749 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:31:06.749 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
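Every namespace on this controller reports nlbaf=7 (eight LBA formats, lbaf0-lbaf7) and flbas=0x4, whose low nibble selects lbaf4: 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks with no interleaved metadata. Decoding that from the array the trace just filled takes a few lines of bash (a sketch; the field extraction assumes the lbaf string format shown above):

flbas_idx=$(( ${nvme2n1[flbas]} & 0xf ))      # 0x4 & 0xf -> 4, the in-use format
lbaf=${nvme2n1[lbaf$flbas_idx]}               # "ms:0 lbads:12 rp:0 (in use)"
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}     # -> 12
echo "data block size: $((1 << lbads)) B"     # 2^12 = 4096 B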
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:31:06.750 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
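The _ctrl_ns[${ns##*n}] assignments above key each namespace map by its numeric suffix: ${ns##*n} deletes the longest prefix ending in 'n', so /sys/class/nvme/nvme2/nvme2n2 and its generic sibling ng2n2 both land on key 2. A standalone illustration of that parameter expansion (not the script itself):

ns=/sys/class/nvme/nvme2/nvme2n2
echo "${ns##*n}"              # -> 2 (everything through the last 'n' removed)
declare -A _ctrl_ns
_ctrl_ns[${ns##*n}]=nvme2n2   # namespace id 2 -> array name "nvme2n2"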
01:31:06.751 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:31:06.752 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
01:31:06.752 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
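With nvme2's three namespaces parsed, the scan registers the controller in the parallel maps the rest of the test suite consumes: ctrls (device -> id-ctrl array name), nvmes (device -> namespace map name), bdfs (device -> PCI address) and ordered_ctrls (numeric index -> device). The bookkeeping shape, sketched with the values from this run:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
ctrl_dev=nvme2
ctrls["$ctrl_dev"]=nvme2                 # associative array holding id-ctrl fields
nvmes["$ctrl_dev"]=nvme2_ns              # map of namespace id -> per-ns array name
bdfs["$ctrl_dev"]=0000:00:12.0           # PCI bus/device/function backing nvme2
ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # index 2 in controller order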
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
01:31:06.753 05:25:58 nvme_scc -- scripts/common.sh@18 -- # local i
01:31:06.753 05:25:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
01:31:06.753 05:25:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
01:31:06.753 05:25:58 nvme_scc -- scripts/common.sh@27 -- # return 0
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
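Before probing nvme3 the script asks scripts/common.sh's pci_can_use whether 0000:00:13.0 may be used: the trace shows a regex test against an empty block list ('[[ =~ 0000:00:13.0 ]]'), then '[[ -z '' ]]' for an empty allow list, then 'return 0', so the device is fair game. A hedged sketch of the shape those four trace lines imply (the PCI_BLOCKED/PCI_ALLOWED names are assumptions based on SPDK conventions, not read from this log):

pci_can_use() {
  local i
  [[ $PCI_BLOCKED =~ $1 ]] && return 1   # explicitly blocked BDFs lose
  [[ -z $PCI_ALLOWED ]] && return 0      # no allow list: every device is usable
  for i in $PCI_ALLOWED; do              # otherwise require an exact match
    [[ $i == "$1" ]] && return 0
  done
  return 1
}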
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme3: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed] nvme3: rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100
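Of the fields just parsed, mdts=7 is the one that bounds I/O sizing: MDTS is a power-of-two multiplier on the controller's minimum memory page size (CAP.MPSMIN), so with the 4 KiB MPSMIN typical of this QEMU controller (an assumption; the trace does not print CAP) the largest single transfer is 2^7 x 4 KiB = 512 KiB. In bash, using the value parsed above:

mpsmin=4096                                   # assumed CAP.MPSMIN of 4 KiB
echo $(( (1 << ${nvme3[mdts]}) * mpsmin ))    # 128 * 4096 = 524288 B = 512 KiB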
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.753 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 01:31:06.754 05:25:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 
05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 01:31:06.754 05:25:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.754 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 
05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 01:31:06.755 
05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 01:31:06.755 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 01:31:06.756 05:25:58 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 01:31:06.756 05:25:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 01:31:06.756 05:25:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
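
The trace above is the nvme_get helper in functions.sh at work: each output line of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvmeN is split on ":" into a register name and value and eval'd into a per-controller associative array (nvme0..nvme3), which the feature checks below then consult by name. A minimal sketch of the same idiom, assuming only that nvme-cli is installed and prints "name : value" pairs; the eval indirection of the real helper is dropped here:

    declare -A ctrl                        # register name -> raw value
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}           # drop the padding after the name
        [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl[oncs]}"              # e.g. oncs=0x15d on these QEMU controllers
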
01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 01:31:07.071 05:25:58 nvme_scc -- nvme/functions.sh@209 -- # return 0 01:31:07.071 05:25:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 01:31:07.071 05:25:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 01:31:07.071 05:25:58 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:31:07.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:07.897 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:31:07.897 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:31:07.897 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:31:07.897 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:31:08.155 05:25:59 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 01:31:08.155 05:25:59 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:31:08.155 05:25:59 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:08.155 05:25:59 nvme_scc -- common/autotest_common.sh@10 -- # set +x 01:31:08.155 ************************************ 01:31:08.155 START TEST nvme_simple_copy 01:31:08.155 ************************************ 01:31:08.155 05:25:59 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 01:31:08.413 Initializing NVMe Controllers 01:31:08.413 Attaching to 0000:00:10.0 01:31:08.413 Controller supports SCC. Attached to 0000:00:10.0 01:31:08.413 Namespace ID: 1 size: 6GB 01:31:08.413 Initialization complete. 
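
Every controller in the scan reports oncs=0x15d, and ctrl_has_scc keys on bit 8 of that field ("(( oncs & 1 << 8 ))" in the trace); per the NVMe base specification, ONCS bit 8 advertises the Copy command, which is why simple_copy prints "Controller supports SCC." above. The whole check reduces to one arithmetic test:

    oncs=0x15d                        # value echoed for nvme0..nvme3 above
    if (( oncs & 1 << 8 )); then      # ONCS bit 8 = Copy (simple copy) support
        echo "Simple Copy Command supported"
    fi
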
01:31:08.413 01:31:08.413 Controller QEMU NVMe Ctrl (12340 ) 01:31:08.413 Controller PCI vendor:6966 PCI subsystem vendor:6900 01:31:08.413 Namespace Block Size:4096 01:31:08.413 Writing LBAs 0 to 63 with Random Data 01:31:08.413 Copied LBAs from 0 - 63 to the Destination LBA 256 01:31:08.413 LBAs matching Written Data: 64 01:31:08.413 01:31:08.413 real 0m0.335s 01:31:08.413 user 0m0.126s 01:31:08.413 sys 0m0.106s 01:31:08.413 05:25:59 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:08.413 ************************************ 01:31:08.413 END TEST nvme_simple_copy 01:31:08.413 05:25:59 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 01:31:08.413 ************************************ 01:31:08.413 01:31:08.413 real 0m8.368s 01:31:08.414 user 0m1.544s 01:31:08.414 sys 0m1.738s 01:31:08.414 05:25:59 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:08.414 ************************************ 01:31:08.414 END TEST nvme_scc 01:31:08.414 ************************************ 01:31:08.414 05:25:59 nvme_scc -- common/autotest_common.sh@10 -- # set +x 01:31:08.414 05:25:59 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 01:31:08.414 05:25:59 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 01:31:08.414 05:25:59 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 01:31:08.414 05:25:59 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 01:31:08.414 05:25:59 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 01:31:08.414 05:25:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:31:08.414 05:25:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:08.414 05:25:59 -- common/autotest_common.sh@10 -- # set +x 01:31:08.414 ************************************ 01:31:08.414 START TEST nvme_fdp 01:31:08.414 ************************************ 01:31:08.414 05:26:00 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 01:31:08.672 * Looking for test storage... 01:31:08.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@345 -- # : 1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@353 -- # local d=1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@355 -- # echo 1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@353 -- # local d=2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@355 -- # echo 2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@368 -- # return 0 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:08.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:08.672 --rc genhtml_branch_coverage=1 01:31:08.672 --rc genhtml_function_coverage=1 01:31:08.672 --rc genhtml_legend=1 01:31:08.672 --rc geninfo_all_blocks=1 01:31:08.672 --rc geninfo_unexecuted_blocks=1 01:31:08.672 01:31:08.672 ' 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:08.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:08.672 --rc genhtml_branch_coverage=1 01:31:08.672 --rc genhtml_function_coverage=1 01:31:08.672 --rc genhtml_legend=1 01:31:08.672 --rc geninfo_all_blocks=1 01:31:08.672 --rc geninfo_unexecuted_blocks=1 01:31:08.672 01:31:08.672 ' 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:08.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:08.672 --rc genhtml_branch_coverage=1 01:31:08.672 --rc genhtml_function_coverage=1 01:31:08.672 --rc genhtml_legend=1 01:31:08.672 --rc geninfo_all_blocks=1 01:31:08.672 --rc geninfo_unexecuted_blocks=1 01:31:08.672 01:31:08.672 ' 01:31:08.672 05:26:00 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:08.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:08.672 --rc genhtml_branch_coverage=1 01:31:08.672 --rc genhtml_function_coverage=1 01:31:08.672 --rc genhtml_legend=1 01:31:08.672 --rc geninfo_all_blocks=1 01:31:08.672 --rc geninfo_unexecuted_blocks=1 01:31:08.672 01:31:08.672 ' 01:31:08.672 05:26:00 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:31:08.672 05:26:00 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:31:08.672 05:26:00 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:08.672 05:26:00 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:08.672 05:26:00 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:08.672 05:26:00 nvme_fdp -- paths/export.sh@5 -- # export PATH 01:31:08.672 05:26:00 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 01:31:08.672 05:26:00 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 01:31:08.673 05:26:00 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 01:31:08.673 05:26:00 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 01:31:08.673 05:26:00 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 01:31:08.673 05:26:00 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 01:31:08.673 05:26:00 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:08.673 05:26:00 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:31:09.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:09.239 Waiting for block devices as requested 01:31:09.239 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:31:09.496 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:31:09.496 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:31:09.496 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:31:14.774 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:31:14.774 05:26:06 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 01:31:14.774 05:26:06 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 01:31:14.774 05:26:06 nvme_fdp -- scripts/common.sh@18 -- # local i 01:31:14.774 05:26:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:31:14.774 05:26:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:14.774 05:26:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 01:31:14.774 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 01:31:14.775 05:26:06 nvme_fdp -- 
01:31:14.775 05:26:06 nvme_fdp -- nvme/functions.sh@21-23 loop -- nvme_get nvme0 id-ctrl /dev/nvme0 (continued) -- remaining identify-controller fields read into the nvme0 array:
01:31:14.775 05:26:06 nvme_fdp --   ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
01:31:14.775 05:26:06 nvme_fdp --   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
01:31:14.776 05:26:06 nvme_fdp --   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
01:31:14.776 05:26:06 nvme_fdp --   anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
01:31:14.777 05:26:06 nvme_fdp --   oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
01:31:14.777 05:26:06 nvme_fdp --   subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
01:31:14.777 05:26:06 nvme_fdp --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
01:31:14.777 05:26:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
01:31:14.777 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
01:31:14.777 05:26:06 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]; ns_dev=ng0n1; nvme_get ng0n1 id-ns /dev/ng0n1
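The capture traced above reduces to one small pattern. A minimal sketch of it (not the verbatim nvme/functions.sh source): run an nvme-cli identify command and load every "name : value" output line into a global associative array named after the device.

# Minimal re-creation of the nvme_get pattern visible in the trace.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declare -gA nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # strip the padding around the field name
        val=${val# }                     # drop the single space after ':'
        [[ -n $val ]] && eval "${ref}[$reg]=\$val"
    done < <("${NVME:-nvme}" "$@")       # this run used /usr/local/src/nvme-cli/nvme
}

# Usage matching the trace: after `nvme_get nvme0 id-ctrl /dev/nvme0`,
# ${nvme0[ctratt]} is 0x8000 and ${nvme0[subnqn]} is nqn.2019-08.org.qemu:12341.
# Because `read -r reg val` with IFS=: splits only at the first colon, values
# that themselves contain colons (ps0, the lbafN lines) survive intact.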
01:31:14.777 05:26:06 nvme_fdp -- nvme/functions.sh@16-20 -- declare -gA ng0n1=(); /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 -- identify-namespace fields read into the ng0n1 array:
01:31:14.778 05:26:06 nvme_fdp --   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:31:14.778 05:26:06 nvme_fdp --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
01:31:14.778 05:26:06 nvme_fdp --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:31:14.779 05:26:06 nvme_fdp --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
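The in-use LBA format above is lbaf4, i.e. lbads:12, so this namespace uses 2^12 = 4096-byte logical blocks; with nsze=0x140000 (1310720) blocks that is 5 GiB. A hypothetical helper (not in functions.sh) that derives this from an array filled by nvme_get:

# Pick the "(in use)" LBA format out of a namespace array and print the
# logical block size. For ng0n1: lbaf4 = "ms:0 lbads:12 rp:0 (in use)" -> 4096.
lba_size() {
    local -n _ns=$1                      # nameref, e.g. to ng0n1
    local key lbads
    for key in "${!_ns[@]}"; do
        [[ $key == lbaf* && ${_ns[$key]} == *"(in use)"* ]] || continue
        lbads=${_ns[$key]#*lbads:}       # -> "12 rp:0 (in use)"
        echo $((1 << ${lbads%% *}))      # -> 4096
        return 0
    done
    return 1
}

# lba_size ng0n1   -> prints 4096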
01:31:14.779 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
01:31:14.779 05:26:06 nvme_fdp -- nvme/functions.sh@54-57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
01:31:14.779 05:26:06 nvme_fdp -- nvme/functions.sh@16-20 -- declare -gA nvme0n1=(); /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 -- identify-namespace fields read into the nvme0n1 array, identical to the ng0n1 view of the same namespace:
01:31:14.780 05:26:06 nvme_fdp --   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
01:31:14.780 05:26:06 nvme_fdp --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
01:31:14.780 05:26:06 nvme_fdp --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
01:31:14.781 05:26:06 nvme_fdp --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
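Both ng0n1 and nvme0n1 are visited because of the extglob pattern at functions.sh@54. A standalone sketch of that expansion (extglob and nullglob enabled explicitly here; the surrounding script relies on extglob being on):

# For ctrl=/sys/class/nvme/nvme0, ${ctrl##*nvme} is "0" and ${ctrl##*/} is
# "nvme0", so the pattern expands to ng0* and nvme0n* -- both the generic
# character device (ng0n1) and the block device (nvme0n1) of each namespace.
shopt -s extglob nullglob
for ctrl in /sys/class/nvme/nvme*; do
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ctrl##*/} -> ${ns##*/}"  # nvme0 -> ng0n1, nvme0 -> nvme0n1
    done
done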
"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 01:31:14.781 05:26:06 nvme_fdp -- scripts/common.sh@18 -- # local i 01:31:14.781 05:26:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:31:14.781 05:26:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:14.781 05:26:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 01:31:14.781 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 01:31:14.781 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
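[Note: the trace above repeats one pattern: nvme_get splits each "reg : val" line of nvme-cli output on ":" and evals it into a global associative array. A condensed, hypothetical re-creation of that loop is sketched below; the real functions.sh invokes the nvme binary at @16 and does its own whitespace handling, so this is an illustration of the technique, not the script itself:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # declares e.g. the global array nvme1
        while IFS=: read -r reg val; do
            val=${val# }               # drop the space after the colon
            [[ -n $val ]] || continue  # skip header/blank lines, as the [[ -n '' ]] guard above does
            eval "${ref}[${reg//[[:space:]]/}]=\$val"
        done < <("$@")                 # here: the full command, e.g. nvme id-ctrl /dev/nvme1
    }

Called as nvme_get nvme1 nvme id-ctrl /dev/nvme1, this produces the same assignments logged above, e.g. nvme1[vid]=0x1b36 and nvme1[sn]='12340 '.]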
01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
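[Note: of the values captured above, nvme1[mdts]=7 is the one that bounds I/O size: per the NVMe spec, MDTS limits a single transfer to 2^MDTS units of the controller's minimum memory page size. Assuming the usual 4 KiB CAP.MPSMIN for this QEMU controller (an assumption, since CAP is not in this dump):

    mdts=7
    echo "$(( (1 << mdts) * 4096 ))"   # 524288 bytes, i.e. 512 KiB per command]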
01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.782 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.783 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 01:31:14.784 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
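[Note: the loop header logged at functions.sh@54 uses an extglob pattern to visit both the generic char node (ng1n1) and the block node (nvme1n1) of each namespace, in exactly the order this trace shows. A stand-alone sketch of the same walk, under those assumptions:

    shopt -s extglob                       # the @(...) alternation below needs it
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue           # unmatched glob stays literal; skip it, as @55 does
        ns_dev=${ns##*/}                   # ng1n1 first, then nvme1n1
        echo "$ns_dev"
    done]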
01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 01:31:14.784 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 01:31:14.785 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:14.785 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
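[Note: with ng1n1[flbas]=0x7 parsed above, the active LBA format is index 7 (the low nibble of FLBAS); its descriptor, dumped further below as lbaf7 with the "(in use)" marker, carries lbads:12. A small sketch of decoding the block size from the array values this run produced:

    flbas=$(( 0x7 & 0xf ))                      # -> 7, the in-use format index
    lbaf='ms:64 lbads:12 rp:0 (in use)'         # ng1n1[lbaf7] from this run
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # -> 12
    echo "$(( 1 << lbads ))"                    # 4096-byte logical blocks]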
01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 01:31:15.048 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.049 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 01:31:15.049 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.049 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 01:31:15.050 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
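
The trace above is nvme/functions.sh populating a global bash associative array (here nvme1n1) one register at a time: nvme_get runs nvme-cli's id-ctrl/id-ns against the device, splits each output line on ":" via IFS, and evals each reg/val pair into the array. A minimal sketch of that pattern, reconstructed from the trace alone (the trimming details and the skip condition are assumptions, not the verbatim SPDK helper):

    # usage: nvme_get nvme1n1 id-ns /dev/nvme1n1
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                         # e.g. declare -gA nvme1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                # trim the register name
            [[ -n $reg && -n ${val# } ]] || continue  # skip header/blank lines
            eval "${ref}[${reg}]=\"${val# }\""      # e.g. nvme1n1[nsfeat]="0x14"
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # binary path as seen in the trace
    }

Because read -r keeps everything after the first colon in val, multi-colon fields survive intact, which is why the trace records entries like nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '. After the scan, fields read back as ordinary lookups, e.g. "${nvme1n1[nsfeat]}" evaluates to 0x14.
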
01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 01:31:15.050 05:26:06 nvme_fdp -- scripts/common.sh@18 -- # local i 01:31:15.050 05:26:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 01:31:15.050 05:26:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:15.050 05:26:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 01:31:15.050 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 01:31:15.051 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
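
As the trace showed earlier for nvme1 (functions.sh@58-63), once a controller's fields are parsed the scan records it in a set of global maps: a per-controller namespace map indexed by namespace number, plus ctrls/nvmes/bdfs keyed by controller name and ordered_ctrls by index. An illustrative read-back under those names, assuming the values seen in the trace:

    # after the scan completes (names taken from the trace; usage is illustrative)
    echo "${ctrls[nvme1]}"             # -> nvme1
    echo "${bdfs[nvme1]}"              # -> 0000:00:10.0
    declare -n ns_map=${nvmes[nvme1]}  # nvme1_ns: namespace index -> device name
    echo "${ns_map[1]}"                # -> nvme1n1
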
01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 01:31:15.051 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 01:31:15.052 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.052 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 01:31:15.053 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 
05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.054 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # 
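(Editor's note: the block above is one full pass of a small parser. Per the functions.sh@16-23 trace records, `nvme id-ns` output is piped through a `read` loop that splits each `field : value` line on the first colon and `eval`s the pair into a global associative array named after the device node, here ng2n1, then ng2n2. A minimal sketch of that pattern, assuming nvme-cli's `field : value` id-ns output as shown; this is illustrative, not the verbatim SPDK helper:

  NVME=${NVME:-/usr/local/src/nvme-cli/nvme}   # binary path seen at functions.sh@16

  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. ng2n1=(), as at functions.sh@20
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue          # the [[ -n ... ]] guard at functions.sh@22
      reg=${reg//[[:space:]]/}           # normalize the field name
      eval "${ref}[\$reg]=\${val# }"     # ng2n1[nsze]=0x100000, ... (functions.sh@23)
    done < <("$NVME" "$@")
  }

  # Invocation mirroring the trace: nvme_get ng2n1 id-ns /dev/ng2n1

The empty-value guard is why the first logged test after each `nvme id-ns` call is `[[ -n '' ]]`: the command's banner line carries no value after the colon and is skipped.)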
ng2n2[nsze]=0x100000 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 01:31:15.055 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.055 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 
05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # 
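(Editor's note: ng2n2 is recorded and the loop advances to ng2n3 here. The functions.sh@54-58 records show the enclosing enumeration: an extglob pattern that matches both the generic character nodes (ng2nY) and the block nodes (nvme2nY) under the controller's sysfs directory, parses each with nvme_get, and indexes the result in _ctrl_ns by namespace number. A sketch of that loop, assuming the nvme_get helper outlined earlier; the controller path is hardcoded purely for illustration:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns=()

  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
    [[ -e $ns ]] || continue                                    # functions.sh@55
    ns_dev=${ns##*/}                                            # ng2n3, nvme2n1, ...
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                     # functions.sh@57
    _ctrl_ns[${ns##*n}]=$ns_dev                                 # functions.sh@58
  done

Because ng2nY and nvme2nY share a namespace id, later iterations overwrite earlier ones in _ctrl_ns; that is why the nvme2n1 pass further below replaces the ng2n1 entry for key 1.)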
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.056 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 01:31:15.057 
05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 01:31:15.057 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.057 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.058 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 01:31:15.058 05:26:06 nvme_fdp -- 
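(Editor's note: every namespace in this trace reports the same format table: nlbaf=7 (zero-based, so eight lbafN entries) with flbas=0x4 selecting lbaf4, 'ms:0 lbads:12 rp:0 (in use)', i.e. no separate metadata and 2^lbads = 2^12 = 4096-byte logical blocks. A hedged snippet showing how the active block size can be derived from one of the arrays filled above, with key names exactly as produced by the parse loop:

  idx=$(( ${ng2n3[flbas]} & 0xf ))                # low nibble picks the format: 4
  lbaf=${ng2n3[lbaf$idx]}                         # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf##*lbads:} ; lbads=${lbads%% *}     # -> 12
  echo "ng2n3 LBA size: $(( 1 << lbads )) bytes"  # 4096
)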
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.058 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.322 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.323 
05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 01:31:15.323 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.323 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.324 
05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 01:31:15.324 05:26:06 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.324 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 01:31:15.325 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.325 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 01:31:15.326 05:26:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 01:31:15.326 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.326 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 01:31:15.327 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 01:31:15.327 05:26:06 nvme_fdp -- scripts/common.sh@18 -- # local i 01:31:15.327 05:26:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 01:31:15.327 05:26:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:15.327 05:26:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 01:31:15.327 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
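
Just above, the id-ctrl parse for the controller at 0000:00:13.0 captured nvme3[mdts]=7. Per the NVMe base specification, MDTS limits the maximum data transfer size to 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN), with 0 meaning no limit; with an assumed 4 KiB minimum page size that works out to 512 KiB here. A short sketch of that arithmetic (the 4 KiB page size is an assumption for illustration, not a value read from this log):

#!/usr/bin/env bash
# Interpreting the mdts value captured above (nvme3[mdts]=7).
declare -A nvme3=([mdts]=7)   # value taken from the trace above
page_size=4096                # assumption: CAP.MPSMIN of 4 KiB
if (( nvme3[mdts] > 0 )); then
    # 2^7 * 4096 = 524288 bytes (512 KiB)
    echo "max data transfer: $(( (1 << nvme3[mdts]) * page_size )) bytes"
else
    echo "max data transfer: unlimited (mdts=0)"
fi
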
01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 
05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 01:31:15.328 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.329 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
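
The oncs value parsed above (nvme3[oncs]=0x15d) is a bitmask of optional NVM commands; decoded against the NVMe base spec's Optional NVM Command Support field, 0x15d sets bits 0, 2, 3, 4, 6, and 8, i.e. Compare, Dataset Management, Write Zeroes, Save/Select in Set/Get Features, Timestamp, and Copy. A sketch of that decode (the bit labels come from the spec; the script itself is illustrative, not part of the test):

#!/usr/bin/env bash
# Decoding the oncs bitmask captured above (nvme3[oncs]=0x15d).
oncs=$((0x15d))               # value taken from the trace above
labels=( "Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
         "Save/Select in Set/Get Features" "Reservations" "Timestamp"
         "Verify" "Copy" )
for bit in "${!labels[@]}"; do
    (( oncs & (1 << bit) )) && echo "bit $bit set: ${labels[bit]}"
done
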
01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
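For reference, the register walk traced above (nvme/functions.sh@21-@23) follows one simple pattern: each "reg : val" line of the controller identify dump is split on ':' and stored into an associative array named after the controller. A minimal stand-alone sketch of that pattern, assuming nvme-cli is installed and the controller is bound to the kernel driver; the variable handling is condensed, not the script verbatim:

    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # register name with whitespace stripped
        val=${val# }                # value with its leading space dropped
        # skip empty values, as the [[ -n ... ]] guard in the trace does
        [[ -n $val ]] && eval "nvme3[${reg}]=\"${val}\""   # e.g. nvme3[sqes]=0x66
    done < <(nvme id-ctrl /dev/nvme3)

Because read assigns the remainder of the line to the last variable, values that themselves contain colons (such as the subnqn seen below) survive the split intact.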
01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 01:31:15.330 05:26:06 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 01:31:15.330 05:26:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/functions.sh@209 -- # return 0 01:31:15.331 05:26:06 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 01:31:15.331 05:26:06 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 01:31:15.331 05:26:06 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:31:15.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:16.463 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:31:16.463 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:31:16.463 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:31:16.722 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:31:16.722 05:26:08 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 01:31:16.722 05:26:08 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:31:16.722 05:26:08 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:16.722 05:26:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 01:31:16.722 ************************************ 01:31:16.722 START TEST nvme_flexible_data_placement 01:31:16.722 ************************************ 01:31:16.722 05:26:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 01:31:16.982 Initializing NVMe Controllers 01:31:16.982 Attaching to 0000:00:13.0 01:31:16.982 Controller supports FDP Attached to 0000:00:13.0 01:31:16.982 Namespace ID: 1 Endurance Group ID: 1 01:31:16.982 Initialization complete. 
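The controller selection traced just before this launch (functions.sh@176-@209) comes down to a single bit test: CTRATT bit 19 advertises Flexible Data Placement. A condensed sketch of ctrl_has_fdp, using the values from this run:

    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))      # bit 19 = FDP supported
    }
    ctrl_has_fdp "$((0x88010))" && echo nvme3    # selected: bit 19 set
    ctrl_has_fdp "$((0x8000))"  || echo skipped  # nvme0/1/2: bit 15 only

That is why only nvme3 (ctratt=0x88010) is echoed while nvme0, nvme1 and nvme2 (ctratt=0x8000) are passed over.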
01:31:16.982 01:31:16.982 ================================== 01:31:16.982 == FDP tests for Namespace: #01 == 01:31:16.982 ================================== 01:31:16.982 01:31:16.982 Get Feature: FDP: 01:31:16.982 ================= 01:31:16.982 Enabled: Yes 01:31:16.982 FDP configuration Index: 0 01:31:16.982 01:31:16.982 FDP configurations log page 01:31:16.982 =========================== 01:31:16.982 Number of FDP configurations: 1 01:31:16.982 Version: 0 01:31:16.982 Size: 112 01:31:16.982 FDP Configuration Descriptor: 0 01:31:16.982 Descriptor Size: 96 01:31:16.982 Reclaim Group Identifier format: 2 01:31:16.982 FDP Volatile Write Cache: Not Present 01:31:16.982 FDP Configuration: Valid 01:31:16.982 Vendor Specific Size: 0 01:31:16.982 Number of Reclaim Groups: 2 01:31:16.982 Number of Reclaim Unit Handles: 8 01:31:16.982 Max Placement Identifiers: 128 01:31:16.982 Number of Namespaces Supported: 256 01:31:16.982 Reclaim unit Nominal Size: 6000000 bytes 01:31:16.982 Estimated Reclaim Unit Time Limit: Not Reported 01:31:16.982 RUH Desc #000: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #001: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #002: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #003: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #004: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #005: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #006: RUH Type: Initially Isolated 01:31:16.982 RUH Desc #007: RUH Type: Initially Isolated 01:31:16.982 01:31:16.982 FDP reclaim unit handle usage log page 01:31:16.982 ====================================== 01:31:16.982 Number of Reclaim Unit Handles: 8 01:31:16.982 RUH Usage Desc #000: RUH Attributes: Controller Specified 01:31:16.982 RUH Usage Desc #001: RUH Attributes: Unused 01:31:16.982 RUH Usage Desc #002: RUH Attributes: Unused 01:31:16.982 RUH Usage Desc #003: RUH Attributes: Unused 01:31:16.982 RUH Usage Desc #004: RUH Attributes: Unused 01:31:16.982 RUH Usage Desc #005: RUH Attributes: Unused 01:31:16.982 RUH Usage Desc #006: RUH Attributes: Unused 01:31:16.982 RUH Usage Desc #007: RUH Attributes: Unused 01:31:16.982 01:31:16.982 FDP statistics log page 01:31:16.982 ======================= 01:31:16.982 Host bytes with metadata written: 866021376 01:31:16.982 Media bytes with metadata written: 866119680 01:31:16.982 Media bytes erased: 0 01:31:16.982 01:31:16.982 FDP Reclaim unit handle status 01:31:16.982 ============================== 01:31:16.982 Number of RUHS descriptors: 2 01:31:16.982 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002619 01:31:16.982 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 01:31:16.982 01:31:16.982 FDP write on placement id: 0 success 01:31:16.982 01:31:16.982 Set Feature: Enabling FDP events on Placement handle: #0 Success 01:31:16.982 01:31:16.982 IO mgmt send: RUH update for Placement ID: #0 Success 01:31:16.982 01:31:16.982 Get Feature: FDP Events for Placement handle: #0 01:31:16.982 ======================== 01:31:16.982 Number of FDP Events: 6 01:31:16.982 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 01:31:16.982 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 01:31:16.982 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 01:31:16.982 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 01:31:16.982 FDP Event: #4 Type: Media Reallocated Enabled: No 01:31:16.982 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 01:31:16.982 01:31:16.982 FDP events log page
01:31:16.982 =================== 01:31:16.982 Number of FDP events: 1 01:31:16.982 FDP Event #0: 01:31:16.982 Event Type: RU Not Written to Capacity 01:31:16.982 Placement Identifier: Valid 01:31:16.982 NSID: Valid 01:31:16.982 Location: Valid 01:31:16.982 Placement Identifier: 0 01:31:16.982 Event Timestamp: 8 01:31:16.982 Namespace Identifier: 1 01:31:16.982 Reclaim Group Identifier: 0 01:31:16.982 Reclaim Unit Handle Identifier: 0 01:31:16.982 01:31:16.982 FDP test passed 01:31:16.982 ************************************ 01:31:16.982 END TEST nvme_flexible_data_placement 01:31:16.982 ************************************ 01:31:16.982 01:31:16.982 real 0m0.308s 01:31:16.982 user 0m0.110s 01:31:16.982 sys 0m0.096s 01:31:16.982 05:26:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:16.982 05:26:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 01:31:16.982 ************************************ 01:31:16.982 END TEST nvme_fdp 01:31:16.982 ************************************ 01:31:16.982 01:31:16.982 real 0m8.532s 01:31:16.982 user 0m1.576s 01:31:16.982 sys 0m1.841s 01:31:16.982 05:26:08 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:16.982 05:26:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 01:31:16.982 05:26:08 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 01:31:16.982 05:26:08 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 01:31:16.982 05:26:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:31:16.982 05:26:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:16.982 05:26:08 -- common/autotest_common.sh@10 -- # set +x 01:31:16.982 ************************************ 01:31:16.982 START TEST nvme_rpc 01:31:16.982 ************************************ 01:31:16.982 05:26:08 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 01:31:17.241 * Looking for test storage... 
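The FDP stage above can be re-run in isolation with the same two commands the harness used (paths as logged in this workspace):

    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'

The log pages it dumps (FDP configurations, RUH usage, statistics, events) can also be cross-checked with stock nvme-cli once the controller is handed back to the kernel driver (setup.sh reset). A minimal sketch, assuming a recent nvme-cli; the FDP logs are endurance-group scoped, so --lsi carries the endurance group id reported above (older nvme-cli builds may lack this flag):

    nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512 --lsi=1   # FDP configurations
    nvme get-log /dev/nvme3 --log-id=0x21 --log-len=512 --lsi=1   # RUH usage
    nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64  --lsi=1   # FDP statistics
    nvme get-log /dev/nvme3 --log-id=0x23 --log-len=512 --lsi=1   # FDP events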
01:31:17.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:31:17.241 05:26:08 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:17.241 05:26:08 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:31:17.241 05:26:08 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:17.241 05:26:08 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:17.241 05:26:08 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:17.241 05:26:08 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:17.241 05:26:08 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:17.241 05:26:08 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:31:17.241 05:26:08 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@345 -- # : 1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@353 -- # local d=1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@355 -- # echo 1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@353 -- # local d=2 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@355 -- # echo 2 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:17.242 05:26:08 nvme_rpc -- scripts/common.sh@368 -- # return 0 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:17.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:17.242 --rc genhtml_branch_coverage=1 01:31:17.242 --rc genhtml_function_coverage=1 01:31:17.242 --rc genhtml_legend=1 01:31:17.242 --rc geninfo_all_blocks=1 01:31:17.242 --rc geninfo_unexecuted_blocks=1 01:31:17.242 01:31:17.242 ' 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:17.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:17.242 --rc genhtml_branch_coverage=1 01:31:17.242 --rc genhtml_function_coverage=1 01:31:17.242 --rc genhtml_legend=1 01:31:17.242 --rc geninfo_all_blocks=1 01:31:17.242 --rc geninfo_unexecuted_blocks=1 01:31:17.242 01:31:17.242 ' 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:31:17.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:17.242 --rc genhtml_branch_coverage=1 01:31:17.242 --rc genhtml_function_coverage=1 01:31:17.242 --rc genhtml_legend=1 01:31:17.242 --rc geninfo_all_blocks=1 01:31:17.242 --rc geninfo_unexecuted_blocks=1 01:31:17.242 01:31:17.242 ' 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:17.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:17.242 --rc genhtml_branch_coverage=1 01:31:17.242 --rc genhtml_function_coverage=1 01:31:17.242 --rc genhtml_legend=1 01:31:17.242 --rc geninfo_all_blocks=1 01:31:17.242 --rc geninfo_unexecuted_blocks=1 01:31:17.242 01:31:17.242 ' 01:31:17.242 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:17.242 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:31:17.242 05:26:08 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:31:17.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:17.501 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 01:31:17.501 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67309 01:31:17.501 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 01:31:17.501 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 01:31:17.501 05:26:08 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67309 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67309 ']' 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:17.501 05:26:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:31:17.501 [2024-12-09 05:26:08.991017] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
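The bdf selection traced above reduces to two lines: get_nvme_bdfs feeds gen_nvme.sh through jq and the first address wins.

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # 0000:00:10.0 in this run

With spdk_tgt up (pid 67309 here), the test then drives the firmware negative path over JSON-RPC; condensed, the sequence traced below is:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
    scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected failure: -32603 "open file failed."
    scripts/rpc.py bdev_nvme_detach_controller Nvme0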
01:31:17.501 [2024-12-09 05:26:08.991495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67309 ] 01:31:17.759 [2024-12-09 05:26:09.182948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:31:17.759 [2024-12-09 05:26:09.347012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:17.759 [2024-12-09 05:26:09.347014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:31:18.692 05:26:10 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:18.692 05:26:10 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:31:18.692 05:26:10 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:31:19.256 Nvme0n1 01:31:19.256 05:26:10 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 01:31:19.256 05:26:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 01:31:19.523 request: 01:31:19.523 { 01:31:19.523 "bdev_name": "Nvme0n1", 01:31:19.523 "filename": "non_existing_file", 01:31:19.523 "method": "bdev_nvme_apply_firmware", 01:31:19.523 "req_id": 1 01:31:19.523 } 01:31:19.523 Got JSON-RPC error response 01:31:19.523 response: 01:31:19.523 { 01:31:19.523 "code": -32603, 01:31:19.523 "message": "open file failed." 01:31:19.523 } 01:31:19.523 05:26:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 01:31:19.523 05:26:10 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 01:31:19.523 05:26:10 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 01:31:19.798 05:26:11 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:31:19.798 05:26:11 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67309 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67309 ']' 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67309 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@959 -- # uname 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67309 01:31:19.798 killing process with pid 67309 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67309' 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67309 01:31:19.798 05:26:11 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67309 01:31:22.326 ************************************ 01:31:22.326 END TEST nvme_rpc 01:31:22.326 ************************************ 01:31:22.326 01:31:22.326 real 0m4.807s 01:31:22.326 user 0m9.033s 01:31:22.326 sys 0m0.809s 01:31:22.326 05:26:13 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:22.326 05:26:13 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:31:22.326 05:26:13 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 01:31:22.326 05:26:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 01:31:22.326 05:26:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:22.326 05:26:13 -- common/autotest_common.sh@10 -- # set +x 01:31:22.326 ************************************ 01:31:22.326 START TEST nvme_rpc_timeouts 01:31:22.327 ************************************ 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 01:31:22.327 * Looking for test storage... 01:31:22.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:22.327 05:26:13 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:22.327 --rc genhtml_branch_coverage=1 01:31:22.327 --rc genhtml_function_coverage=1 01:31:22.327 --rc genhtml_legend=1 01:31:22.327 --rc geninfo_all_blocks=1 01:31:22.327 --rc geninfo_unexecuted_blocks=1 01:31:22.327 01:31:22.327 ' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:22.327 --rc genhtml_branch_coverage=1 01:31:22.327 --rc genhtml_function_coverage=1 01:31:22.327 --rc genhtml_legend=1 01:31:22.327 --rc geninfo_all_blocks=1 01:31:22.327 --rc geninfo_unexecuted_blocks=1 01:31:22.327 01:31:22.327 ' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:22.327 --rc genhtml_branch_coverage=1 01:31:22.327 --rc genhtml_function_coverage=1 01:31:22.327 --rc genhtml_legend=1 01:31:22.327 --rc geninfo_all_blocks=1 01:31:22.327 --rc geninfo_unexecuted_blocks=1 01:31:22.327 01:31:22.327 ' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:22.327 --rc genhtml_branch_coverage=1 01:31:22.327 --rc genhtml_function_coverage=1 01:31:22.327 --rc genhtml_legend=1 01:31:22.327 --rc geninfo_all_blocks=1 01:31:22.327 --rc geninfo_unexecuted_blocks=1 01:31:22.327 01:31:22.327 ' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67385 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67385 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67423 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 01:31:22.327 05:26:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67423 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67423 ']' 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:22.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:22.327 05:26:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 01:31:22.327 [2024-12-09 05:26:13.781838] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:31:22.327 [2024-12-09 05:26:13.782268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67423 ] 01:31:22.586 [2024-12-09 05:26:13.968801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:31:22.586 [2024-12-09 05:26:14.107795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:22.586 [2024-12-09 05:26:14.107812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:31:23.521 05:26:14 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:23.521 Checking default timeout settings: 01:31:23.521 05:26:14 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 01:31:23.521 05:26:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 01:31:23.521 05:26:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:31:24.086 Making settings changes with rpc: 01:31:24.086 05:26:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 01:31:24.086 05:26:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 01:31:24.344 Check default vs. modified settings: 01:31:24.344 05:26:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 01:31:24.344 05:26:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67385 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67385 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 01:31:24.603 Setting action_on_timeout is changed as expected. 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67385 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67385 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 01:31:24.603 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 01:31:24.604 Setting timeout_us is changed as expected. 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
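Each setting is verified with the same grep/awk/sed pipeline (nvme_rpc_timeouts.sh@39-@47); condensed for the timeout_us case just traced, with the comparison logic simplified:

    before=$(grep timeout_us /tmp/settings_default_67385  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep timeout_us /tmp/settings_modified_67385 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    # 0 vs 12000000 in this run: the values must differ for the test to pass
    [ "$before" != "$after" ] && echo "Setting timeout_us is changed as expected."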
01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67385 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67385 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 01:31:24.604 Setting timeout_admin_us is changed as expected. 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67385 /tmp/settings_modified_67385 01:31:24.604 05:26:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67423 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67423 ']' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67423 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67423 01:31:24.604 killing process with pid 67423 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67423' 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67423 01:31:24.604 05:26:16 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67423 01:31:27.134 RPC TIMEOUT SETTING TEST PASSED. 01:31:27.134 05:26:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
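The whole round-trip this stage just verified is, condensed (tmpfile names as logged):

    scripts/rpc.py save_config > /tmp/settings_default_67385
    scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    scripts/rpc.py save_config > /tmp/settings_modified_67385

Each of action_on_timeout, timeout_us and timeout_admin_us flipped from its default (none/0/0) to the requested value, so the comparison passes for all three.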
01:31:27.134 ************************************ 01:31:27.134 END TEST nvme_rpc_timeouts 01:31:27.135 ************************************ 01:31:27.135 01:31:27.135 real 0m5.090s 01:31:27.135 user 0m9.808s 01:31:27.135 sys 0m0.797s 01:31:27.135 05:26:18 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:27.135 05:26:18 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 01:31:27.135 05:26:18 -- spdk/autotest.sh@239 -- # uname -s 01:31:27.135 05:26:18 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 01:31:27.135 05:26:18 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 01:31:27.135 05:26:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:31:27.135 05:26:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:27.135 05:26:18 -- common/autotest_common.sh@10 -- # set +x 01:31:27.135 ************************************ 01:31:27.135 START TEST sw_hotplug 01:31:27.135 ************************************ 01:31:27.135 05:26:18 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 01:31:27.135 * Looking for test storage... 01:31:27.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 01:31:27.135 05:26:18 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:27.135 05:26:18 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 01:31:27.135 05:26:18 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:27.393 05:26:18 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:27.393 05:26:18 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:27.393 05:26:18 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@345 -- # : 1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@353 -- # local d=1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@355 -- # echo 1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@353 -- # local d=2 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@355 -- # echo 2 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:27.394 05:26:18 sw_hotplug -- scripts/common.sh@368 -- # return 0 01:31:27.394 05:26:18 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:27.394 05:26:18 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:27.394 --rc genhtml_branch_coverage=1 01:31:27.394 --rc genhtml_function_coverage=1 01:31:27.394 --rc genhtml_legend=1 01:31:27.394 --rc geninfo_all_blocks=1 01:31:27.394 --rc geninfo_unexecuted_blocks=1 01:31:27.394 01:31:27.394 ' 01:31:27.394 05:26:18 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:27.394 --rc genhtml_branch_coverage=1 01:31:27.394 --rc genhtml_function_coverage=1 01:31:27.394 --rc genhtml_legend=1 01:31:27.394 --rc geninfo_all_blocks=1 01:31:27.394 --rc geninfo_unexecuted_blocks=1 01:31:27.394 01:31:27.394 ' 01:31:27.394 05:26:18 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:27.394 --rc genhtml_branch_coverage=1 01:31:27.394 --rc genhtml_function_coverage=1 01:31:27.394 --rc genhtml_legend=1 01:31:27.394 --rc geninfo_all_blocks=1 01:31:27.394 --rc geninfo_unexecuted_blocks=1 01:31:27.394 01:31:27.394 ' 01:31:27.394 05:26:18 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:27.394 --rc genhtml_branch_coverage=1 01:31:27.394 --rc genhtml_function_coverage=1 01:31:27.394 --rc genhtml_legend=1 01:31:27.394 --rc geninfo_all_blocks=1 01:31:27.394 --rc geninfo_unexecuted_blocks=1 01:31:27.394 01:31:27.394 ' 01:31:27.394 05:26:18 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:31:27.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:27.911 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:31:27.911 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:31:27.911 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 01:31:27.911 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 01:31:27.911 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 01:31:27.911 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 01:31:27.911 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
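The nvme_in_userspace call expanded in the trace that follows scans PCI config space for NVMe-class functions: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), hence the "0108" class match and the -p02 grep. As a one-liner, exactly as the script assembles it:

    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each candidate bdf is then filtered through pci_can_use (honoring any allow/deny list) before being appended to the bdfs array.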
01:31:27.911 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@233 -- # local class 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@234 -- # local subclass 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@235 -- # local progif 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@236 -- # class=01 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@238 -- # progif=02 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@18 -- # local i 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@18 -- # local i 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@18 -- # local i 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:31:27.911 05:26:19 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@18 -- # local i 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:31:27.911 05:26:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 01:31:27.912 05:26:19 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 01:31:27.912 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 01:31:27.912 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 01:31:27.912 05:26:19 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:31:28.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:28.428 Waiting for block devices as requested 01:31:28.428 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:31:28.428 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:31:28.686 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:31:28.686 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:31:33.947 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:31:33.947 05:26:25 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 01:31:33.947 05:26:25 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:31:34.206 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 01:31:34.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:34.463 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 01:31:34.721 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 01:31:34.979 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:31:34.979 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 01:31:34.979 05:26:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68297 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 01:31:34.979 05:26:26 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 01:31:34.979 05:26:26 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 01:31:34.979 05:26:26 sw_hotplug -- common/autotest_common.sh@711 -- # exec 01:31:34.979 05:26:26 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 01:31:34.979 05:26:26 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 01:31:34.979 05:26:26 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 01:31:35.238 Initializing NVMe Controllers 01:31:35.238 Attaching to 0000:00:10.0 01:31:35.238 Attaching to 0000:00:11.0 01:31:35.238 Attached to 0000:00:11.0 01:31:35.238 Attached to 0000:00:10.0 01:31:35.238 Initialization complete. Starting I/O... 
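[editor's note] The class-code scan traced earlier (scripts/common.sh@233-245) is self-contained enough to reproduce. Below is a minimal standalone sketch of the same lspci pipeline; it is a reconstruction from the trace, not the actual scripts/common.sh source, and assumes lspci is on PATH:

    # Enumerate PCI functions with class 01 (mass storage), subclass 08
    # (non-volatile memory), progif 02 (NVM Express) -- i.e. NVMe controllers.
    class=$(printf '%02x' 1)
    subclass=$(printf '%02x' 8)
    progif=$(printf '%02x' 2)

    # lspci -mm -n -D quotes the class field ("0108") and appends -p02 for a
    # non-zero programming interface; cc keeps the double quotes on purpose so
    # the awk regex match lines up with the quoted field, exactly as traced.
    lspci -mm -n -D |
        grep -i -- "-p${progif}" |
        awk -v "cc=\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
        tr -d '"'

On this VM the pipeline yields the four QEMU NVMe functions 0000:00:10.0 through 0000:00:13.0, which nvme_count=2 then trims to the first two.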
01:31:35.238 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 01:31:35.238 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 01:31:35.238 01:31:36.613 QEMU NVMe Ctrl (12341 ): 1180 I/Os completed (+1180) 01:31:36.613 QEMU NVMe Ctrl (12340 ): 1209 I/Os completed (+1209) 01:31:36.613 01:31:37.547 QEMU NVMe Ctrl (12341 ): 2764 I/Os completed (+1584) 01:31:37.547 QEMU NVMe Ctrl (12340 ): 2947 I/Os completed (+1738) 01:31:37.547 01:31:38.481 QEMU NVMe Ctrl (12341 ): 4544 I/Os completed (+1780) 01:31:38.481 QEMU NVMe Ctrl (12340 ): 4767 I/Os completed (+1820) 01:31:38.481 01:31:39.413 QEMU NVMe Ctrl (12341 ): 6187 I/Os completed (+1643) 01:31:39.413 QEMU NVMe Ctrl (12340 ): 6443 I/Os completed (+1676) 01:31:39.413 01:31:40.346 QEMU NVMe Ctrl (12341 ): 7975 I/Os completed (+1788) 01:31:40.346 QEMU NVMe Ctrl (12340 ): 8238 I/Os completed (+1795) 01:31:40.346 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:31:41.280 [2024-12-09 05:26:32.591711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:31:41.280 Controller removed: QEMU NVMe Ctrl (12340 ) 01:31:41.280 [2024-12-09 05:26:32.594067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.594140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.594171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.594197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:31:41.280 [2024-12-09 05:26:32.597294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.597355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.597379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.597417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:31:41.280 [2024-12-09 05:26:32.627612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
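[editor's note] The bare `echo 1` at sw_hotplug.sh@40 that precedes each burst of abort/fail messages is a surprise removal through sysfs. A generic sketch of the mechanism (the exact write targets are an assumption here; check test/nvme/sw_hotplug.sh for the real paths):

    bdf=0000:00:10.0                             # hypothetical target function
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"  # yank the function off the bus
    # the hotplug app then fails the controller and aborts outstanding I/O
    echo 1 > /sys/bus/pci/rescan                 # re-enumerate to bring it back

The "Controller removed" / "unregister_dev" lines interleaved below are the hotplug example app reacting to exactly this.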
01:31:41.280 Controller removed: QEMU NVMe Ctrl (12341 ) 01:31:41.280 [2024-12-09 05:26:32.629605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.629677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.629741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.629769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:31:41.280 [2024-12-09 05:26:32.632678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.632743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.632774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 [2024-12-09 05:26:32.632797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:31:41.280 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:31:41.281 Attaching to 0000:00:10.0 01:31:41.281 Attached to 0000:00:10.0 01:31:41.281 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 01:31:41.281 01:31:41.538 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:31:41.538 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:31:41.538 05:26:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:31:41.538 Attaching to 0000:00:11.0 01:31:41.538 Attached to 0000:00:11.0 01:31:42.473 QEMU NVMe Ctrl (12340 ): 1828 I/Os completed (+1828) 01:31:42.473 QEMU NVMe Ctrl (12341 ): 1655 I/Os completed (+1655) 01:31:42.473 01:31:43.409 QEMU NVMe Ctrl (12340 ): 3636 I/Os completed (+1808) 01:31:43.409 QEMU NVMe Ctrl (12341 ): 3466 I/Os completed (+1811) 01:31:43.409 01:31:44.349 QEMU NVMe Ctrl (12340 ): 5404 I/Os completed (+1768) 01:31:44.349 QEMU NVMe Ctrl (12341 ): 5239 I/Os completed (+1773) 01:31:44.349 01:31:45.284 QEMU NVMe Ctrl (12340 ): 7236 I/Os completed (+1832) 01:31:45.284 QEMU NVMe Ctrl (12341 ): 7071 I/Os completed (+1832) 01:31:45.284 01:31:46.660 QEMU NVMe Ctrl (12340 ): 8968 I/Os completed (+1732) 01:31:46.660 QEMU NVMe Ctrl (12341 ): 8837 I/Os completed (+1766) 01:31:46.660 01:31:47.228 QEMU NVMe Ctrl (12340 ): 10760 I/Os completed (+1792) 01:31:47.228 QEMU NVMe Ctrl (12341 ): 10664 I/Os completed (+1827) 01:31:47.228 01:31:48.604 QEMU NVMe Ctrl (12340 ): 12496 I/Os completed (+1736) 01:31:48.604 QEMU NVMe Ctrl (12341 ): 12457 I/Os completed (+1793) 01:31:48.604 01:31:49.540 QEMU NVMe Ctrl (12340 ): 14252 I/Os completed (+1756) 01:31:49.540 QEMU NVMe 
Ctrl (12341 ): 14246 I/Os completed (+1789) 01:31:49.540 01:31:50.474 QEMU NVMe Ctrl (12340 ): 16068 I/Os completed (+1816) 01:31:50.474 QEMU NVMe Ctrl (12341 ): 16105 I/Os completed (+1859) 01:31:50.474 01:31:51.408 QEMU NVMe Ctrl (12340 ): 17904 I/Os completed (+1836) 01:31:51.408 QEMU NVMe Ctrl (12341 ): 17993 I/Os completed (+1888) 01:31:51.408 01:31:52.343 QEMU NVMe Ctrl (12340 ): 19728 I/Os completed (+1824) 01:31:52.343 QEMU NVMe Ctrl (12341 ): 19860 I/Os completed (+1867) 01:31:52.343 01:31:53.278 QEMU NVMe Ctrl (12340 ): 21568 I/Os completed (+1840) 01:31:53.278 QEMU NVMe Ctrl (12341 ): 21736 I/Os completed (+1876) 01:31:53.278 01:31:53.536 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 01:31:53.536 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:31:53.536 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:31:53.536 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:31:53.536 [2024-12-09 05:26:44.940613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:31:53.536 Controller removed: QEMU NVMe Ctrl (12340 ) 01:31:53.536 [2024-12-09 05:26:44.942732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 [2024-12-09 05:26:44.942804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 [2024-12-09 05:26:44.942835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 [2024-12-09 05:26:44.942877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:31:53.536 [2024-12-09 05:26:44.946197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 [2024-12-09 05:26:44.946265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 [2024-12-09 05:26:44.946310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 [2024-12-09 05:26:44.946332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.536 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:31:53.536 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:31:53.536 [2024-12-09 05:26:44.971901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:31:53.536 Controller removed: QEMU NVMe Ctrl (12341 ) 01:31:53.536 [2024-12-09 05:26:44.974041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 [2024-12-09 05:26:44.974116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 [2024-12-09 05:26:44.974157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 [2024-12-09 05:26:44.974186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:31:53.537 [2024-12-09 05:26:44.977116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 [2024-12-09 05:26:44.977170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 [2024-12-09 05:26:44.977198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 [2024-12-09 05:26:44.977221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:31:53.537 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 01:31:53.537 05:26:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:31:53.537 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:31:53.537 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:31:53.537 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:31:53.796 Attaching to 0000:00:10.0 01:31:53.796 Attached to 0000:00:10.0 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:31:53.796 05:26:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:31:53.796 Attaching to 0000:00:11.0 01:31:53.796 Attached to 0000:00:11.0 01:31:54.363 QEMU NVMe Ctrl (12340 ): 1160 I/Os completed (+1160) 01:31:54.363 QEMU NVMe Ctrl (12341 ): 1008 I/Os completed (+1008) 01:31:54.363 01:31:55.297 QEMU NVMe Ctrl (12340 ): 2960 I/Os completed (+1800) 01:31:55.297 QEMU NVMe Ctrl (12341 ): 2843 I/Os completed (+1835) 01:31:55.297 01:31:56.231 QEMU NVMe Ctrl (12340 ): 4740 I/Os completed (+1780) 01:31:56.231 QEMU NVMe Ctrl (12341 ): 4650 I/Os completed (+1807) 01:31:56.231 01:31:57.606 QEMU NVMe Ctrl (12340 ): 6528 I/Os completed (+1788) 01:31:57.607 QEMU NVMe Ctrl (12341 ): 6489 I/Os completed (+1839) 01:31:57.607 01:31:58.542 QEMU NVMe Ctrl (12340 ): 8336 I/Os completed (+1808) 01:31:58.542 QEMU NVMe Ctrl (12341 ): 8321 I/Os completed (+1832) 01:31:58.542 01:31:59.475 QEMU NVMe Ctrl (12340 ): 10100 I/Os completed (+1764) 01:31:59.475 QEMU NVMe Ctrl (12341 ): 10106 I/Os completed (+1785) 01:31:59.475 01:32:00.410 QEMU NVMe Ctrl (12340 ): 11876 I/Os completed (+1776) 01:32:00.410 QEMU NVMe Ctrl (12341 ): 11908 I/Os completed (+1802) 01:32:00.410 01:32:01.347 QEMU NVMe Ctrl (12340 ): 13676 I/Os completed (+1800) 01:32:01.347 QEMU NVMe Ctrl (12341 ): 13717 I/Os completed (+1809) 01:32:01.347 01:32:02.281 
QEMU NVMe Ctrl (12340 ): 15452 I/Os completed (+1776) 01:32:02.281 QEMU NVMe Ctrl (12341 ): 15500 I/Os completed (+1783) 01:32:02.281 01:32:03.658 QEMU NVMe Ctrl (12340 ): 17200 I/Os completed (+1748) 01:32:03.658 QEMU NVMe Ctrl (12341 ): 17260 I/Os completed (+1760) 01:32:03.658 01:32:04.225 QEMU NVMe Ctrl (12340 ): 18888 I/Os completed (+1688) 01:32:04.225 QEMU NVMe Ctrl (12341 ): 18964 I/Os completed (+1704) 01:32:04.225 01:32:05.598 QEMU NVMe Ctrl (12340 ): 20700 I/Os completed (+1812) 01:32:05.598 QEMU NVMe Ctrl (12341 ): 20810 I/Os completed (+1846) 01:32:05.598 01:32:05.855 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 01:32:05.855 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:32:05.855 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:05.855 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:05.855 [2024-12-09 05:26:57.279983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:32:05.855 Controller removed: QEMU NVMe Ctrl (12340 ) 01:32:05.855 [2024-12-09 05:26:57.282169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.282236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.282265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.282293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:32:05.855 [2024-12-09 05:26:57.285192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.285255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.285280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.285301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 01:32:05.855 EAL: Scan for (pci) bus failed. 01:32:05.855 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:05.855 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:05.855 [2024-12-09 05:26:57.305558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
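[editor's note] Two things worth noting at this point in the trace: the EAL "cannot open sysfs value .../vendor" lines are expected, since the device node vanishes while the app rescans mid-removal; and the echoes at sw_hotplug.sh@56-62 after each cycle are the re-attach half. A plausible reconstruction of that rebind sequence using the stock sysfs driver_override mechanism (the specific files written are an assumption, not lifted from the script):

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/rescan                            # re-enumerate the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                # kernel binds the override
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear it for next time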
01:32:05.855 Controller removed: QEMU NVMe Ctrl (12341 ) 01:32:05.855 [2024-12-09 05:26:57.307430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.307521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.307552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 [2024-12-09 05:26:57.307578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.855 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:32:05.855 [2024-12-09 05:26:57.310340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.856 [2024-12-09 05:26:57.310416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.856 [2024-12-09 05:26:57.310451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.856 [2024-12-09 05:26:57.310471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:05.856 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 01:32:05.856 EAL: Scan for (pci) bus failed. 01:32:05.856 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 01:32:05.856 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:32:05.856 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:05.856 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:05.856 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 01:32:06.112 Attaching to 0000:00:10.0 01:32:06.112 Attached to 0000:00:10.0 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:06.112 05:26:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:32:06.112 Attaching to 0000:00:11.0 01:32:06.112 Attached to 0000:00:11.0 01:32:06.112 unregister_dev: QEMU NVMe Ctrl (12340 ) 01:32:06.112 unregister_dev: QEMU NVMe Ctrl (12341 ) 01:32:06.112 [2024-12-09 05:26:57.631074] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 01:32:18.327 05:27:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 01:32:18.327 05:27:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:32:18.327 05:27:09 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.03 01:32:18.327 05:27:09 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.03 01:32:18.327 05:27:09 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 01:32:18.327 05:27:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 01:32:18.327 05:27:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 01:32:18.327 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 05:27:09 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68297 01:32:24.883 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68297) - No such process 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68297 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68841 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 01:32:24.883 05:27:15 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68841 01:32:24.883 05:27:15 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68841 ']' 01:32:24.883 05:27:15 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:24.883 05:27:15 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:24.883 05:27:15 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:24.883 05:27:15 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:24.883 05:27:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:24.883 [2024-12-09 05:27:15.767883] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
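[editor's note] waitforlisten (common/autotest_common.sh@835-844 above) blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock; the EAL parameter dump follows once it does. A stand-in sketch with the same shape, polling a stock SPDK RPC (the 100-iteration budget mirrors max_retries in the trace; the rpc.py path assumes the repo root as cwd):

    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds only once the target's RPC server is up
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done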
01:32:24.883 [2024-12-09 05:27:15.768103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68841 ] 01:32:24.883 [2024-12-09 05:27:15.959805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:24.883 [2024-12-09 05:27:16.115814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:25.448 05:27:16 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:25.448 05:27:16 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 01:32:25.448 05:27:16 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 01:32:25.448 05:27:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:25.448 05:27:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:25.448 05:27:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:25.448 05:27:16 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 01:32:25.448 05:27:17 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 01:32:25.448 05:27:17 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 01:32:25.448 05:27:17 sw_hotplug -- common/autotest_common.sh@711 -- # exec 01:32:25.448 05:27:17 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 01:32:25.448 05:27:17 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 01:32:25.448 05:27:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:32.043 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:32.043 05:27:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:32.044 05:27:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:32.044 05:27:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:32.044 [2024-12-09 05:27:23.095800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
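[editor's note] With the target up, hotplug handling moves from the standalone app to spdk_tgt itself: sw_hotplug.sh@115 enables the target's NVMe hotplug poller over RPC, and the same switch is flipped off and on again later in the run (@119/@120). In plain rpc.py terms:

    scripts/rpc.py bdev_nvme_set_hotplug -e   # poll for controllers coming and going
    # ... removal / re-attach cycles, observed via bdev_get_bdevs ...
    scripts/rpc.py bdev_nvme_set_hotplug -d   # stop the poller

rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py.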
01:32:32.044 [2024-12-09 05:27:23.098956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.099074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.099118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 [2024-12-09 05:27:23.099179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.099201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.099219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 [2024-12-09 05:27:23.099235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.099252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.099265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 [2024-12-09 05:27:23.099287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.099316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.099349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:32:32.044 [2024-12-09 05:27:23.495714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:32:32.044 [2024-12-09 05:27:23.498798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.498867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.498891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 [2024-12-09 05:27:23.498918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.498937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.498951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 [2024-12-09 05:27:23.498969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.498990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.499006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 [2024-12-09 05:27:23.499021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:32.044 [2024-12-09 05:27:23.499046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:32:32.044 [2024-12-09 05:27:23.499075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:32.044 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:32.044 05:27:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:32.044 05:27:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:32.044 05:27:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:32.303 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:32:32.562 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:32:32.562 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:32.562 05:27:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:32:44.764 05:27:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:32:44.764 05:27:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:32:44.764 05:27:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:32:44.764 05:27:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:44.764 05:27:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:44.764 05:27:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:44.764 05:27:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:44.764 05:27:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:44.764 05:27:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:44.764 05:27:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:44.764 05:27:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:44.764 05:27:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:44.764 [2024-12-09 05:27:36.096060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
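[editor's note] The bdev_bdfs helper traced at sw_hotplug.sh@12-13 is what the bdev-mode loop keys off: it asks the target which PCI functions still back an NVMe bdev. Written as a plain pipeline rather than the /dev/fd/63 process substitution the trace shows (equivalent output):

    bdev_bdfs() {
        # PCI address behind every NVMe bdev the target currently exposes
        scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

An empty result means both controllers are gone; two addresses mean both came back.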
01:32:44.764 [2024-12-09 05:27:36.099069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:44.764 [2024-12-09 05:27:36.099139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:32:44.764 [2024-12-09 05:27:36.099160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:44.764 [2024-12-09 05:27:36.099206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:44.764 [2024-12-09 05:27:36.099237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:32:44.764 [2024-12-09 05:27:36.099254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:44.764 [2024-12-09 05:27:36.099270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:44.764 [2024-12-09 05:27:36.099286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:32:44.764 [2024-12-09 05:27:36.099299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:44.764 [2024-12-09 05:27:36.099316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:44.764 [2024-12-09 05:27:36.099329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:32:44.764 [2024-12-09 05:27:36.099346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:32:44.764 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:32:45.023 [2024-12-09 05:27:36.496042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:32:45.023 [2024-12-09 05:27:36.498838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:45.023 [2024-12-09 05:27:36.498921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:32:45.023 [2024-12-09 05:27:36.498947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:45.023 [2024-12-09 05:27:36.498975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:45.023 [2024-12-09 05:27:36.499029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:32:45.023 [2024-12-09 05:27:36.499044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:45.023 [2024-12-09 05:27:36.499061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:45.023 [2024-12-09 05:27:36.499074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:32:45.023 [2024-12-09 05:27:36.499090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:45.023 [2024-12-09 05:27:36.499104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:45.023 [2024-12-09 05:27:36.499120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:32:45.023 [2024-12-09 05:27:36.499165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:45.023 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:32:45.023 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:32:45.023 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:32:45.023 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:45.023 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:45.023 05:27:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:45.023 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:45.023 05:27:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:45.023 05:27:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:45.281 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:32:45.539 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:32:45.539 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:45.539 05:27:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:32:57.790 05:27:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:32:57.790 05:27:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:32:57.790 05:27:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:32:57.790 05:27:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:57.790 05:27:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:57.790 05:27:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:57.790 05:27:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:57.790 05:27:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:57.790 05:27:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:57.790 05:27:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:57.790 05:27:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:57.790 05:27:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:57.790 [2024-12-09 05:27:49.096376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
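[editor's note] The long run of backslashes in the sw_hotplug.sh@71 check above is only an xtrace artifact: bash escapes every character of the quoted right-hand side of a [[ == ]] so it prints as a literal pattern. The check itself is roughly:

    expected="0000:00:10.0 0000:00:11.0"
    # quoting the right-hand side makes == a literal comparison, not a glob
    [[ ${bdfs[*]} == "$expected" ]] && echo 'both controllers re-attached'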
01:32:57.790 [2024-12-09 05:27:49.099598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:57.790 [2024-12-09 05:27:49.099714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:32:57.790 [2024-12-09 05:27:49.099738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:57.790 [2024-12-09 05:27:49.099770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:57.790 [2024-12-09 05:27:49.099786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:32:57.790 [2024-12-09 05:27:49.099806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:57.790 [2024-12-09 05:27:49.099822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:57.790 [2024-12-09 05:27:49.099838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:32:57.790 [2024-12-09 05:27:49.099853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:57.790 [2024-12-09 05:27:49.099871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:57.790 [2024-12-09 05:27:49.099885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:32:57.790 [2024-12-09 05:27:49.099902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:32:57.790 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:32:58.049 [2024-12-09 05:27:49.496367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
01:32:58.049 [2024-12-09 05:27:49.499399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:58.049 [2024-12-09 05:27:49.499470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:32:58.049 [2024-12-09 05:27:49.499533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:58.049 [2024-12-09 05:27:49.499561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:58.049 [2024-12-09 05:27:49.499579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:32:58.049 [2024-12-09 05:27:49.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:58.049 [2024-12-09 05:27:49.499611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:58.049 [2024-12-09 05:27:49.499641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:32:58.049 [2024-12-09 05:27:49.499687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:58.049 [2024-12-09 05:27:49.499703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:32:58.049 [2024-12-09 05:27:49.499733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:32:58.049 [2024-12-09 05:27:49.499751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:58.049 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:32:58.049 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:32:58.049 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:32:58.050 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:32:58.050 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:32:58.050 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:32:58.050 05:27:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:58.050 05:27:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:32:58.050 05:27:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:32:58.308 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:32:58.566 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:32:58.566 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:32:58.566 05:27:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:33:10.770 05:28:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:33:10.770 05:28:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:33:10.770 05:28:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:33:10.770 05:28:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:10.770 05:28:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:10.771 05:28:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:10.771 05:28:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.771 05:28:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.02 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.02 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.02 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.02 2 01:33:10.771 remove_attach_helper took 45.02s to complete (handling 2 nvme drive(s)) 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 01:33:10.771 05:28:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 01:33:10.771 05:28:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 01:33:10.771 05:28:02 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:17.372 05:28:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:17.372 05:28:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:17.372 [2024-12-09 05:28:08.152276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 01:33:17.372 [2024-12-09 05:28:08.156830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.156884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.156905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 [2024-12-09 05:28:08.156936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.156951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.156967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 [2024-12-09 05:28:08.156982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.156997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.157011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 [2024-12-09 05:28:08.157027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.157040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.157057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 05:28:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:33:17.372 [2024-12-09 05:28:08.552251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
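[editor's note] Between removal and the next attach the helper sits in the wait loop traced at sw_hotplug.sh@50-51: re-query the target every half second until nothing is left. Sketched with the bdev_bdfs function from the note above:

    # poll until no bdev still claims one of the removed PCI functions
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done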
01:33:17.372 [2024-12-09 05:28:08.554252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.554308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.554350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 [2024-12-09 05:28:08.554377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.554395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.554410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 [2024-12-09 05:28:08.554429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.554442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.554459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 [2024-12-09 05:28:08.554474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:17.372 [2024-12-09 05:28:08.554491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:33:17.372 [2024-12-09 05:28:08.554504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:17.372 05:28:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:17.372 05:28:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:17.372 05:28:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:33:17.372 05:28:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:33:17.630 05:28:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:33:17.630 05:28:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:33:17.630 05:28:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:29.832 05:28:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.832 05:28:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:29.832 05:28:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:29.832 05:28:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.832 05:28:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:29.832 [2024-12-09 05:28:21.152541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
01:33:29.832 [2024-12-09 05:28:21.154626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:29.832 [2024-12-09 05:28:21.154713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:33:29.832 [2024-12-09 05:28:21.154736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:29.832 [2024-12-09 05:28:21.154768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:29.832 [2024-12-09 05:28:21.154784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:33:29.832 [2024-12-09 05:28:21.154804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:29.832 [2024-12-09 05:28:21.154820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:29.832 [2024-12-09 05:28:21.154852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:33:29.832 [2024-12-09 05:28:21.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:29.832 [2024-12-09 05:28:21.154885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:29.832 [2024-12-09 05:28:21.154898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:33:29.832 [2024-12-09 05:28:21.154915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:29.832 05:28:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:33:29.832 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:33:30.091 [2024-12-09 05:28:21.552526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
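
The (( 2 > 0 )), sleep 0.5, and "Still waiting for %s to be gone" records around this point are single turns of a detach poll. A sketch of that loop (nvme/sw_hotplug.sh@50-51), reusing the bdev_bdfs helper sketched earlier:

    # Poll until no NVMe bdev reports a PCI address, i.e. until the
    # hot-removed controllers are fully gone from the SPDK target.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
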
01:33:30.091 [2024-12-09 05:28:21.554644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:30.091 [2024-12-09 05:28:21.554749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:33:30.091 [2024-12-09 05:28:21.554775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:30.091 [2024-12-09 05:28:21.554810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:30.091 [2024-12-09 05:28:21.554834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:33:30.091 [2024-12-09 05:28:21.554849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:30.091 [2024-12-09 05:28:21.554868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:30.091 [2024-12-09 05:28:21.554881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:33:30.091 [2024-12-09 05:28:21.554908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:30.091 [2024-12-09 05:28:21.554923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:30.091 [2024-12-09 05:28:21.554940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:33:30.091 [2024-12-09 05:28:21.554954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:30.091 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:33:30.091 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:33:30.091 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:33:30.091 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:30.092 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:30.092 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:30.092 05:28:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:30.092 05:28:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:30.092 05:28:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:30.350 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:33:30.350 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:33:30.350 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:33:30.350 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:33:30.351 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:33:30.351 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:33:30.351 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:33:30.351 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:33:30.351 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:33:30.351 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:33:30.620 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:33:30.620 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:33:30.620 05:28:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:33:42.881 05:28:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:33:42.881 05:28:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:33:42.881 05:28:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:33:42.881 05:28:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:42.881 05:28:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:42.881 05:28:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:42.881 05:28:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:42.881 05:28:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:42.881 05:28:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 01:33:42.881 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:33:42.882 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:33:42.882 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:42.882 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:42.882 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:42.882 05:28:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:42.882 05:28:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:42.882 05:28:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:42.882 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 01:33:42.882 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 01:33:42.882 [2024-12-09 05:28:34.152712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
01:33:42.882 [2024-12-09 05:28:34.158432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:42.882 [2024-12-09 05:28:34.158494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:33:42.882 [2024-12-09 05:28:34.158517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:42.882 [2024-12-09 05:28:34.158547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:42.882 [2024-12-09 05:28:34.158563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:33:42.882 [2024-12-09 05:28:34.158580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:42.882 [2024-12-09 05:28:34.158596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:42.882 [2024-12-09 05:28:34.158616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:33:42.882 [2024-12-09 05:28:34.158630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:42.882 [2024-12-09 05:28:34.158648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:42.882 [2024-12-09 05:28:34.158684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:33:42.882 [2024-12-09 05:28:34.158707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:43.139 [2024-12-09 05:28:34.552678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
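
Once the controllers are gone, the trace shows echo 1 (nvme/sw_hotplug.sh@56) and then, per device, echo uio_pci_generic, two echoes of the BDF, and echo '' (@58-62). xtrace does not record redirections, so the sysfs targets are invisible here; the following is a plausible reconstruction using the kernel's standard driver_override rebind flow, with every path an assumption rather than something taken from this log:

    # Rescan the PCI bus, then steer each rediscovered controller to the
    # uio_pci_generic driver. Sysfs paths are assumed; run as root.
    echo 1 > /sys/bus/pci/rescan
    for dev in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done

The sleep 12 that follows gives the devices time to reattach before bdev_bdfs is compared against the expected "0000:00:10.0 0000:00:11.0" list.
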
01:33:43.139 [2024-12-09 05:28:34.554541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:43.139 [2024-12-09 05:28:34.554622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 01:33:43.139 [2024-12-09 05:28:34.554645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:43.139 [2024-12-09 05:28:34.554682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:43.139 [2024-12-09 05:28:34.554715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 01:33:43.139 [2024-12-09 05:28:34.554731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:43.139 [2024-12-09 05:28:34.554751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:43.139 [2024-12-09 05:28:34.554764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 01:33:43.139 [2024-12-09 05:28:34.554794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:43.139 [2024-12-09 05:28:34.554823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 01:33:43.139 [2024-12-09 05:28:34.554860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 01:33:43.139 [2024-12-09 05:28:34.554878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:43.139 05:28:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:43.139 05:28:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:43.139 05:28:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 01:33:43.139 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 01:33:43.396 05:28:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 01:33:55.607 05:28:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 01:33:55.607 05:28:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 01:33:55.607 05:28:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 01:33:55.607 05:28:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 01:33:55.607 05:28:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 01:33:55.607 05:28:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:55.607 05:28:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:55.607 05:28:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:55.607 05:28:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:33:55.607 05:28:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.98 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.98 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 01:33:55.607 05:28:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.98 01:33:55.607 05:28:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.98 2 01:33:55.607 remove_attach_helper took 44.98s to complete (handling 2 nvme drive(s)) 05:28:47 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 01:33:55.607 05:28:47 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68841 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68841 ']' 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68841 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@959 -- # uname 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68841 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:55.607 killing process with pid 68841 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68841' 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68841 01:33:55.607 05:28:47 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68841 01:33:58.136 05:28:49 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:33:58.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:33:58.961 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:33:58.961 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:33:58.961 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:33:58.961 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:33:59.220 01:33:59.220 real 2m31.985s 01:33:59.220 user 1m53.209s 01:33:59.220 sys 0m18.503s 01:33:59.220 05:28:50 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:33:59.220 05:28:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 01:33:59.220 ************************************ 01:33:59.220 END TEST sw_hotplug 01:33:59.220 ************************************ 01:33:59.220 05:28:50 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 01:33:59.220 05:28:50 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 01:33:59.220 05:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:33:59.220 05:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:59.220 05:28:50 -- common/autotest_common.sh@10 -- # set +x 01:33:59.220 ************************************ 01:33:59.220 START TEST nvme_xnvme 01:33:59.220 ************************************ 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 01:33:59.220 * Looking for test storage... 01:33:59.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@345 -- # : 1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:59.220 05:28:50 nvme_xnvme -- scripts/common.sh@368 -- # return 0 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.220 --rc genhtml_branch_coverage=1 01:33:59.220 --rc genhtml_function_coverage=1 01:33:59.220 --rc genhtml_legend=1 01:33:59.220 --rc geninfo_all_blocks=1 01:33:59.220 --rc geninfo_unexecuted_blocks=1 01:33:59.220 01:33:59.220 ' 01:33:59.220 05:28:50 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.220 --rc genhtml_branch_coverage=1 01:33:59.220 --rc genhtml_function_coverage=1 01:33:59.221 --rc genhtml_legend=1 01:33:59.221 --rc geninfo_all_blocks=1 01:33:59.221 --rc geninfo_unexecuted_blocks=1 01:33:59.221 01:33:59.221 ' 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:59.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.221 --rc genhtml_branch_coverage=1 01:33:59.221 --rc genhtml_function_coverage=1 01:33:59.221 --rc genhtml_legend=1 01:33:59.221 --rc geninfo_all_blocks=1 01:33:59.221 --rc geninfo_unexecuted_blocks=1 01:33:59.221 01:33:59.221 ' 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:59.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.221 --rc genhtml_branch_coverage=1 01:33:59.221 --rc genhtml_function_coverage=1 01:33:59.221 --rc genhtml_legend=1 01:33:59.221 --rc geninfo_all_blocks=1 01:33:59.221 --rc geninfo_unexecuted_blocks=1 01:33:59.221 01:33:59.221 ' 01:33:59.221 05:28:50 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 01:33:59.221 05:28:50 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 01:33:59.221 05:28:50 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:33:59.221 05:28:50 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 01:33:59.221 05:28:50 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 01:33:59.483 05:28:50 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 01:33:59.483 05:28:50 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 01:33:59.483 05:28:50 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 01:33:59.483 05:28:50 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 01:33:59.483 #define SPDK_CONFIG_H 01:33:59.483 #define SPDK_CONFIG_AIO_FSDEV 1 01:33:59.483 #define SPDK_CONFIG_APPS 1 01:33:59.483 #define SPDK_CONFIG_ARCH native 01:33:59.483 #define SPDK_CONFIG_ASAN 1 01:33:59.483 #undef SPDK_CONFIG_AVAHI 01:33:59.483 #undef SPDK_CONFIG_CET 01:33:59.483 #define SPDK_CONFIG_COPY_FILE_RANGE 1 01:33:59.483 #define SPDK_CONFIG_COVERAGE 1 01:33:59.483 #define SPDK_CONFIG_CROSS_PREFIX 01:33:59.484 #undef SPDK_CONFIG_CRYPTO 01:33:59.484 #undef SPDK_CONFIG_CRYPTO_MLX5 01:33:59.484 #undef SPDK_CONFIG_CUSTOMOCF 01:33:59.484 #undef SPDK_CONFIG_DAOS 01:33:59.484 #define SPDK_CONFIG_DAOS_DIR 01:33:59.484 #define SPDK_CONFIG_DEBUG 1 01:33:59.484 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 01:33:59.484 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 01:33:59.484 #define SPDK_CONFIG_DPDK_INC_DIR 01:33:59.484 #define SPDK_CONFIG_DPDK_LIB_DIR 01:33:59.484 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 01:33:59.484 #undef SPDK_CONFIG_DPDK_UADK 01:33:59.484 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:33:59.484 #define SPDK_CONFIG_EXAMPLES 1 01:33:59.484 #undef SPDK_CONFIG_FC 01:33:59.484 #define SPDK_CONFIG_FC_PATH 01:33:59.484 #define SPDK_CONFIG_FIO_PLUGIN 1 01:33:59.484 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 01:33:59.484 #define SPDK_CONFIG_FSDEV 1 01:33:59.484 #undef SPDK_CONFIG_FUSE 01:33:59.484 #undef SPDK_CONFIG_FUZZER 01:33:59.484 #define SPDK_CONFIG_FUZZER_LIB 01:33:59.484 #undef SPDK_CONFIG_GOLANG 01:33:59.484 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 01:33:59.484 #define SPDK_CONFIG_HAVE_EVP_MAC 1 01:33:59.484 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 01:33:59.484 #define SPDK_CONFIG_HAVE_KEYUTILS 1 01:33:59.484 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 01:33:59.484 #undef SPDK_CONFIG_HAVE_LIBBSD 01:33:59.484 #undef SPDK_CONFIG_HAVE_LZ4 01:33:59.484 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 01:33:59.484 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 01:33:59.484 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 01:33:59.484 #define SPDK_CONFIG_IDXD 1 01:33:59.484 #define SPDK_CONFIG_IDXD_KERNEL 1 01:33:59.484 #undef SPDK_CONFIG_IPSEC_MB 01:33:59.484 #define SPDK_CONFIG_IPSEC_MB_DIR 01:33:59.484 #define SPDK_CONFIG_ISAL 1 01:33:59.484 #define SPDK_CONFIG_ISAL_CRYPTO 1 01:33:59.484 #define SPDK_CONFIG_ISCSI_INITIATOR 1 01:33:59.484 #define SPDK_CONFIG_LIBDIR 01:33:59.484 #undef SPDK_CONFIG_LTO 01:33:59.484 #define SPDK_CONFIG_MAX_LCORES 128 01:33:59.484 #define SPDK_CONFIG_MAX_NUMA_NODES 1 01:33:59.484 #define SPDK_CONFIG_NVME_CUSE 1 01:33:59.484 #undef SPDK_CONFIG_OCF 01:33:59.484 #define SPDK_CONFIG_OCF_PATH 01:33:59.484 #define SPDK_CONFIG_OPENSSL_PATH 01:33:59.484 #undef SPDK_CONFIG_PGO_CAPTURE 01:33:59.484 #define SPDK_CONFIG_PGO_DIR 01:33:59.484 #undef SPDK_CONFIG_PGO_USE 01:33:59.484 #define SPDK_CONFIG_PREFIX /usr/local 01:33:59.484 #undef SPDK_CONFIG_RAID5F 01:33:59.484 #undef SPDK_CONFIG_RBD 01:33:59.484 #define SPDK_CONFIG_RDMA 1 01:33:59.484 #define SPDK_CONFIG_RDMA_PROV verbs 01:33:59.484 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 01:33:59.484 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 01:33:59.484 #define SPDK_CONFIG_RDMA_SET_TOS 1 01:33:59.484 #define SPDK_CONFIG_SHARED 1 01:33:59.484 #undef SPDK_CONFIG_SMA 01:33:59.484 #define SPDK_CONFIG_TESTS 1 01:33:59.484 #undef SPDK_CONFIG_TSAN 01:33:59.484 #define SPDK_CONFIG_UBLK 1 01:33:59.484 #define SPDK_CONFIG_UBSAN 1 01:33:59.484 #undef SPDK_CONFIG_UNIT_TESTS 01:33:59.484 #undef SPDK_CONFIG_URING 01:33:59.484 #define SPDK_CONFIG_URING_PATH 01:33:59.484 #undef SPDK_CONFIG_URING_ZNS 01:33:59.484 #undef SPDK_CONFIG_USDT 01:33:59.484 #undef SPDK_CONFIG_VBDEV_COMPRESS 01:33:59.484 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 01:33:59.484 #undef SPDK_CONFIG_VFIO_USER 01:33:59.484 #define SPDK_CONFIG_VFIO_USER_DIR 01:33:59.484 #define SPDK_CONFIG_VHOST 1 01:33:59.484 #define SPDK_CONFIG_VIRTIO 1 01:33:59.484 #undef SPDK_CONFIG_VTUNE 01:33:59.484 #define SPDK_CONFIG_VTUNE_DIR 01:33:59.484 #define SPDK_CONFIG_WERROR 1 01:33:59.484 #define SPDK_CONFIG_WPDK_DIR 01:33:59.484 #define SPDK_CONFIG_XNVME 1 01:33:59.484 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 01:33:59.484 05:28:50 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 01:33:59.484 05:28:50 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:33:59.484 05:28:50 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 01:33:59.484 05:28:50 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:33:59.484 05:28:50 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:33:59.484 05:28:50 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:33:59.484 05:28:50 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.484 05:28:50 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.484 05:28:50 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.484 05:28:50 nvme_xnvme -- paths/export.sh@5 -- # export PATH 01:33:59.484 05:28:50 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.484 05:28:50 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@68 -- # uname -s 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 01:33:59.484 
05:28:50 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 01:33:59.484 05:28:50 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 01:33:59.484 05:28:50 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 01:33:59.484 05:28:50 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 01:33:59.484 05:28:50 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@70 -- # : 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@126 -- # : 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@140 -- # : 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@154 -- # : 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 01:33:59.485 05:28:50 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@169 -- # : 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 01:33:59.486 05:28:50 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:33:59.486 05:28:50 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
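The block above is autotest_common.sh composing the sanitizer runtime before any test code runs: a fresh suppression file is written for LeakSanitizer (masking a known libfuse3.so leak) and the ASan/UBSan option strings are exported so that any finding aborts the process and fails the test. A minimal standalone sketch of the same setup, with the option values taken verbatim from the trace (the redirection into the suppression file is inferred, since the log only shows the rm, cat and echo steps):

    # Reconstructed sanitizer setup; values verbatim from the trace above,
    # the '>' redirection is an assumption about how the echo lands in the file
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' > "$asan_suppression_file"   # known libfuse3 leak, not SPDK's
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

abort_on_error=1 is what turns a sanitizer report into a hard failure here instead of a log-only warning.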
01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 01:33:59.486 05:28:50 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70176 ]] 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70176 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.930nMC 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.930nMC/tests/xnvme /tmp/spdk.930nMC 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 01:33:59.487 05:28:50 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13966848000 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5601005568 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13966848000 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5601005568 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96459661312 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3243118592 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 01:33:59.487 * Looking for test storage... 
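The mount tables above come from set_test_storage, which was asked for 2147483648 bytes (2 GiB) and pads that to requested_size=2214592512 (a 64 MiB margin). Each df -T row is read into per-mount associative arrays; a simplified sketch of the loop being traced (not the full autotest_common.sh implementation):

    # One associative-array entry per mount point, keyed by mount path
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source    # e.g. /dev/vda5
        fss["$mount"]=$fs           # e.g. btrfs
        sizes["$mount"]=$size
        uses["$mount"]=$use
        avails["$mount"]=$avail     # free space, compared against requested_size
    done < <(df -T | grep -v Filesystem)

The lines that follow resolve the test directory to its mount point and check avails[] against requested_size; here /home (btrfs, 13966848000 available) wins.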
01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13966848000 01:33:59.487 05:28:50 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:33:59.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 01:33:59.488 05:28:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:59.488 05:28:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@345 -- # : 1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@368 -- # return 0 01:33:59.488 05:28:51 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:59.488 05:28:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:59.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.488 --rc genhtml_branch_coverage=1 01:33:59.488 --rc genhtml_function_coverage=1 01:33:59.488 --rc genhtml_legend=1 01:33:59.488 --rc geninfo_all_blocks=1 01:33:59.488 --rc geninfo_unexecuted_blocks=1 01:33:59.488 01:33:59.488 ' 01:33:59.488 05:28:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:59.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.488 --rc genhtml_branch_coverage=1 01:33:59.488 --rc genhtml_function_coverage=1 01:33:59.488 --rc genhtml_legend=1 01:33:59.488 --rc geninfo_all_blocks=1 
01:33:59.488 --rc geninfo_unexecuted_blocks=1 01:33:59.488 01:33:59.488 ' 01:33:59.488 05:28:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:59.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.488 --rc genhtml_branch_coverage=1 01:33:59.488 --rc genhtml_function_coverage=1 01:33:59.488 --rc genhtml_legend=1 01:33:59.488 --rc geninfo_all_blocks=1 01:33:59.488 --rc geninfo_unexecuted_blocks=1 01:33:59.488 01:33:59.488 ' 01:33:59.488 05:28:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:59.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:59.488 --rc genhtml_branch_coverage=1 01:33:59.488 --rc genhtml_function_coverage=1 01:33:59.488 --rc genhtml_legend=1 01:33:59.488 --rc geninfo_all_blocks=1 01:33:59.488 --rc geninfo_unexecuted_blocks=1 01:33:59.488 01:33:59.488 ' 01:33:59.488 05:28:51 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:33:59.488 05:28:51 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:33:59.488 05:28:51 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.488 05:28:51 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.489 05:28:51 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.489 05:28:51 nvme_xnvme -- paths/export.sh@5 -- # export PATH 01:33:59.489 05:28:51 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:59.489 05:28:51 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 01:33:59.489 05:28:51 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:34:00.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:34:00.056 Waiting for block devices as requested 01:34:00.056 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:34:00.314 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:34:00.314 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 01:34:00.314 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 01:34:05.579 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 01:34:05.579 05:28:56 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 01:34:05.837 05:28:57 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 01:34:05.838 05:28:57 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 01:34:06.096 05:28:57 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:34:06.096 05:28:57 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:34:06.096 No valid GPT data, bailing 01:34:06.096 05:28:57 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:34:06.096 05:28:57 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 01:34:06.096 05:28:57 nvme_xnvme -- scripts/common.sh@395 -- # return 1 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 01:34:06.096 05:28:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:34:06.096 05:28:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:34:06.096 05:28:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:06.096 05:28:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:34:06.096 ************************************ 01:34:06.096 START TEST xnvme_rpc 01:34:06.096 ************************************ 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70568 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70568 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70568 ']' 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:06.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:06.096 05:28:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:06.353 [2024-12-09 05:28:57.787819] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
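xnvme_rpc validates the bdev_xnvme RPC round trip: start spdk_tgt, create an xnvme bdev on the raw NVMe node, read every parameter back through framework_get_config, delete the bdev, kill the target. rpc_cmd in the trace is the test wrapper around scripts/rpc.py pointed at /var/tmp/spdk.sock, so a hand-driven equivalent would look roughly like this (a sketch assuming the same device and defaults as this run; the test's waitforlisten polls the socket before the first RPC, which is elided here):

    # Manual version of the flow the trace below drives
    build/bin/spdk_tgt & tgt_pid=$!                     # listens on /var/tmp/spdk.sock
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # libaio, conserve_cpu off
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/nvme0n1
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"

The same name/filename/io_mechanism/conserve_cpu checks repeat below, one jq filter per parameter.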
01:34:06.353 [2024-12-09 05:28:57.788045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70568 ] 01:34:06.610 [2024-12-09 05:28:57.980466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:06.610 [2024-12-09 05:28:58.134504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:07.542 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:07.542 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:34:07.542 05:28:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 01:34:07.542 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:07.543 xnvme_bdev 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.543 05:28:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:07.543 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70568 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70568 ']' 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70568 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70568 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:07.801 killing process with pid 70568 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70568' 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70568 01:34:07.801 05:28:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70568 01:34:10.332 01:34:10.332 real 0m3.879s 01:34:10.332 user 0m4.051s 01:34:10.332 sys 0m0.573s 01:34:10.332 05:29:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:10.332 05:29:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:10.332 ************************************ 01:34:10.332 END TEST xnvme_rpc 01:34:10.332 ************************************ 01:34:10.332 05:29:01 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:34:10.332 05:29:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:34:10.332 05:29:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:10.332 05:29:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:34:10.332 ************************************ 01:34:10.332 START TEST xnvme_bdevperf 01:34:10.332 ************************************ 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:34:10.332 05:29:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:34:10.332 { 01:34:10.332 "subsystems": [ 01:34:10.332 { 01:34:10.332 "subsystem": "bdev", 01:34:10.332 "config": [ 01:34:10.332 { 01:34:10.332 "params": { 01:34:10.332 "io_mechanism": "libaio", 01:34:10.332 "conserve_cpu": false, 01:34:10.332 "filename": "/dev/nvme0n1", 01:34:10.332 "name": "xnvme_bdev" 01:34:10.332 }, 01:34:10.332 "method": "bdev_xnvme_create" 01:34:10.332 }, 01:34:10.332 { 01:34:10.332 "method": "bdev_wait_for_examine" 01:34:10.332 } 01:34:10.332 ] 01:34:10.332 } 01:34:10.332 ] 01:34:10.332 } 01:34:10.332 [2024-12-09 05:29:01.684178] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:10.332 [2024-12-09 05:29:01.684389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70653 ] 01:34:10.332 [2024-12-09 05:29:01.860219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:10.590 [2024-12-09 05:29:01.994112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:10.850 Running I/O for 5 seconds... 01:34:13.157 31527.00 IOPS, 123.15 MiB/s [2024-12-09T05:29:05.716Z] 29441.00 IOPS, 115.00 MiB/s [2024-12-09T05:29:06.650Z] 28855.33 IOPS, 112.72 MiB/s [2024-12-09T05:29:07.599Z] 27575.50 IOPS, 107.72 MiB/s [2024-12-09T05:29:07.599Z] 26453.20 IOPS, 103.33 MiB/s 01:34:15.982 Latency(us) 01:34:15.982 [2024-12-09T05:29:07.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:15.982 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:34:15.982 xnvme_bdev : 5.01 26418.33 103.20 0.00 0.00 2416.12 314.65 7000.44 01:34:15.982 [2024-12-09T05:29:07.599Z] =================================================================================================================== 01:34:15.982 [2024-12-09T05:29:07.599Z] Total : 26418.33 103.20 0.00 0.00 2416.12 314.65 7000.44 01:34:16.936 05:29:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:16.936 05:29:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:34:16.936 05:29:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:34:16.936 05:29:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:34:16.936 05:29:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:34:16.936 { 01:34:16.936 "subsystems": [ 01:34:16.936 { 01:34:16.936 "subsystem": "bdev", 01:34:16.936 "config": [ 01:34:16.936 { 01:34:16.936 "params": { 01:34:16.936 "io_mechanism": "libaio", 01:34:16.936 "conserve_cpu": false, 01:34:16.936 "filename": "/dev/nvme0n1", 01:34:16.936 "name": "xnvme_bdev" 01:34:16.936 }, 01:34:16.936 "method": "bdev_xnvme_create" 01:34:16.936 }, 01:34:16.936 { 01:34:16.936 "method": "bdev_wait_for_examine" 01:34:16.936 } 01:34:16.936 ] 01:34:16.936 } 01:34:16.936 ] 01:34:16.936 } 01:34:17.195 [2024-12-09 05:29:08.564881] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
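Both bdevperf passes read their bdev layout from --json /dev/fd/62: gen_conf prints the JSON shown above and the harness hands it over on a spare file descriptor instead of a config file on disk. A hand-run equivalent of the randread pass (config verbatim from the trace; feeding it through process substitution is an illustrative reconstruction of the /dev/fd plumbing):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}'
    build/examples/bdevperf --json <(printf '%s' "$conf") \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096   # 64-deep 4 KiB random I/O for 5 s

The randread pass settled at 26418.33 IOPS (103.20 MiB/s, mean latency 2416.12 us); the randwrite pass starting here reuses the identical config and differs only in -w randwrite.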
01:34:17.195 [2024-12-09 05:29:08.565098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70731 ] 01:34:17.195 [2024-12-09 05:29:08.749392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:17.453 [2024-12-09 05:29:08.873810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:17.710 Running I/O for 5 seconds... 01:34:20.031 20949.00 IOPS, 81.83 MiB/s [2024-12-09T05:29:12.584Z] 21348.50 IOPS, 83.39 MiB/s [2024-12-09T05:29:13.521Z] 22843.00 IOPS, 89.23 MiB/s [2024-12-09T05:29:14.455Z] 24356.75 IOPS, 95.14 MiB/s 01:34:22.838 Latency(us) 01:34:22.838 [2024-12-09T05:29:14.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:22.838 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:34:22.838 xnvme_bdev : 5.00 26175.82 102.25 0.00 0.00 2438.09 271.83 5719.51 01:34:22.838 [2024-12-09T05:29:14.455Z] =================================================================================================================== 01:34:22.838 [2024-12-09T05:29:14.455Z] Total : 26175.82 102.25 0.00 0.00 2438.09 271.83 5719.51 01:34:23.773 01:34:23.773 real 0m13.686s 01:34:23.773 user 0m5.064s 01:34:23.773 sys 0m6.095s 01:34:23.773 05:29:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:23.773 05:29:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:34:23.773 ************************************ 01:34:23.773 END TEST xnvme_bdevperf 01:34:23.773 ************************************ 01:34:23.773 05:29:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:34:23.773 05:29:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:34:23.773 05:29:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:23.773 05:29:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:34:23.773 ************************************ 01:34:23.773 START TEST xnvme_fio_plugin 01:34:23.773 ************************************ 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:34:23.773 05:29:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:23.773 { 01:34:23.773 "subsystems": [ 01:34:23.773 { 01:34:23.773 "subsystem": "bdev", 01:34:23.773 "config": [ 01:34:23.773 { 01:34:23.773 "params": { 01:34:23.773 "io_mechanism": "libaio", 01:34:23.773 "conserve_cpu": false, 01:34:23.773 "filename": "/dev/nvme0n1", 01:34:23.773 "name": "xnvme_bdev" 01:34:23.773 }, 01:34:23.773 "method": "bdev_xnvme_create" 01:34:23.773 }, 01:34:23.773 { 01:34:23.773 "method": "bdev_wait_for_examine" 01:34:23.773 } 01:34:23.773 ] 01:34:23.773 } 01:34:23.773 ] 01:34:23.773 } 01:34:24.039 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:34:24.039 fio-3.35 01:34:24.039 Starting 1 thread 01:34:30.612 01:34:30.612 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70856: Mon Dec 9 05:29:21 2024 01:34:30.612 read: IOPS=25.6k, BW=100MiB/s (105MB/s)(500MiB/5001msec) 01:34:30.612 slat (usec): min=5, max=1276, avg=34.93, stdev=27.64 01:34:30.612 clat (usec): min=114, max=5900, avg=1363.25, stdev=747.98 01:34:30.612 lat (usec): min=178, max=5924, avg=1398.18, stdev=750.85 01:34:30.612 clat percentiles (usec): 01:34:30.612 | 1.00th=[ 233], 5.00th=[ 343], 10.00th=[ 453], 20.00th=[ 660], 01:34:30.612 | 30.00th=[ 873], 40.00th=[ 1074], 50.00th=[ 1287], 60.00th=[ 1500], 01:34:30.612 | 70.00th=[ 1713], 80.00th=[ 1975], 90.00th=[ 2343], 95.00th=[ 2671], 01:34:30.612 | 99.00th=[ 3523], 99.50th=[ 3949], 99.90th=[ 4621], 99.95th=[ 4883], 01:34:30.612 | 99.99th=[ 5342] 01:34:30.612 bw ( KiB/s): min=92942, max=130272, per=100.00%, avg=104509.11, stdev=11780.90, samples=9 
01:34:30.612 iops : min=23235, max=32568, avg=26127.22, stdev=2945.29, samples=9 01:34:30.612 lat (usec) : 250=1.46%, 500=10.86%, 750=11.74%, 1000=12.28% 01:34:30.612 lat (msec) : 2=44.23%, 4=18.98%, 10=0.44% 01:34:30.612 cpu : usr=24.28%, sys=54.76%, ctx=60, majf=0, minf=628 01:34:30.612 IO depths : 1=0.1%, 2=1.4%, 4=5.2%, 8=12.5%, 16=26.2%, 32=52.9%, >=64=1.7% 01:34:30.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:34:30.612 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 01:34:30.612 issued rwts: total=128081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:34:30.612 latency : target=0, window=0, percentile=100.00%, depth=64 01:34:30.612 01:34:30.612 Run status group 0 (all jobs): 01:34:30.612 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=500MiB (525MB), run=5001-5001msec 01:34:31.179 ----------------------------------------------------- 01:34:31.179 Suppressions used: 01:34:31.179 count bytes template 01:34:31.179 1 11 /usr/src/fio/parse.c 01:34:31.179 1 8 libtcmalloc_minimal.so 01:34:31.179 1 904 libcrypto.so 01:34:31.179 ----------------------------------------------------- 01:34:31.179 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:31.179 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:34:31.437 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:34:31.437 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- 
# [[ -n /usr/lib64/libasan.so.8 ]] 01:34:31.437 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:34:31.437 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:34:31.437 05:29:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:31.437 { 01:34:31.437 "subsystems": [ 01:34:31.437 { 01:34:31.437 "subsystem": "bdev", 01:34:31.437 "config": [ 01:34:31.437 { 01:34:31.437 "params": { 01:34:31.437 "io_mechanism": "libaio", 01:34:31.437 "conserve_cpu": false, 01:34:31.437 "filename": "/dev/nvme0n1", 01:34:31.437 "name": "xnvme_bdev" 01:34:31.437 }, 01:34:31.437 "method": "bdev_xnvme_create" 01:34:31.437 }, 01:34:31.437 { 01:34:31.437 "method": "bdev_wait_for_examine" 01:34:31.437 } 01:34:31.437 ] 01:34:31.437 } 01:34:31.437 ] 01:34:31.437 } 01:34:31.695 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:34:31.695 fio-3.35 01:34:31.695 Starting 1 thread 01:34:38.293 01:34:38.293 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70948: Mon Dec 9 05:29:28 2024 01:34:38.293 write: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(451MiB/5001msec); 0 zone resets 01:34:38.293 slat (usec): min=5, max=1207, avg=38.90, stdev=30.45 01:34:38.293 clat (usec): min=83, max=6017, avg=1511.36, stdev=857.19 01:34:38.293 lat (usec): min=102, max=6132, avg=1550.26, stdev=861.29 01:34:38.293 clat percentiles (usec): 01:34:38.293 | 1.00th=[ 253], 5.00th=[ 367], 10.00th=[ 486], 20.00th=[ 717], 01:34:38.293 | 30.00th=[ 947], 40.00th=[ 1172], 50.00th=[ 1385], 60.00th=[ 1631], 01:34:38.293 | 70.00th=[ 1909], 80.00th=[ 2245], 90.00th=[ 2671], 95.00th=[ 3032], 01:34:38.293 | 99.00th=[ 4015], 99.50th=[ 4424], 99.90th=[ 4948], 99.95th=[ 5145], 01:34:38.293 | 99.99th=[ 5538] 01:34:38.293 bw ( KiB/s): min=75584, max=108808, per=99.56%, avg=91949.33, stdev=9100.95, samples=9 01:34:38.293 iops : min=18896, max=27202, avg=22987.33, stdev=2275.24, samples=9 01:34:38.293 lat (usec) : 100=0.01%, 250=0.96%, 500=9.66%, 750=10.93%, 1000=11.11% 01:34:38.293 lat (msec) : 2=40.17%, 4=26.15%, 10=1.03% 01:34:38.293 cpu : usr=24.66%, sys=53.84%, ctx=108, majf=0, minf=765 01:34:38.293 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=12.5%, 16=26.1%, 32=52.9%, >=64=1.7% 01:34:38.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:34:38.293 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 01:34:38.293 issued rwts: total=0,115470,0,0 short=0,0,0,0 dropped=0,0,0,0 01:34:38.293 latency : target=0, window=0, percentile=100.00%, depth=64 01:34:38.293 01:34:38.293 Run status group 0 (all jobs): 01:34:38.293 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=451MiB (473MB), run=5001-5001msec 01:34:38.858 ----------------------------------------------------- 01:34:38.858 Suppressions used: 01:34:38.858 count bytes template 01:34:38.858 1 11 /usr/src/fio/parse.c 01:34:38.858 1 8 libtcmalloc_minimal.so 01:34:38.858 1 904 libcrypto.so 01:34:38.858 ----------------------------------------------------- 01:34:38.858 01:34:38.858 01:34:38.858 real 0m14.880s 01:34:38.858 user 0m6.233s 01:34:38.858 sys 0m6.173s 01:34:38.858 05:29:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:34:38.858 ************************************ 01:34:38.858 END TEST xnvme_fio_plugin 01:34:38.858 ************************************ 01:34:38.858 05:29:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:34:38.858 05:29:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:34:38.858 05:29:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 01:34:38.858 05:29:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 01:34:38.858 05:29:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:34:38.858 05:29:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:34:38.858 05:29:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:38.858 05:29:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:34:38.858 ************************************ 01:34:38.858 START TEST xnvme_rpc 01:34:38.858 ************************************ 01:34:38.858 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:34:38.858 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:34:38.858 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:34:38.858 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:34:38.858 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71040 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71040 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71040 ']' 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:38.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:38.859 05:29:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:38.859 [2024-12-09 05:29:30.363810] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
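This second xnvme_rpc pass repeats the same round trip with conserve_cpu=true: xnvme.sh maps the value through its cc table (cc["false"]="" and cc["true"]="-c"), so the only differences from the first pass are the extra -c on the create call and the expected readback:

    # Same flow as the first xnvme_rpc pass, with CPU-conserving mode on
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true

With -c the xnvme io_mechanism is meant to avoid busy-polling for completions, trading some latency for CPU time; the bdevperf and fio passes after this re-measure the same workloads in that mode.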
01:34:38.859 [2024-12-09 05:29:30.363987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71040 ] 01:34:39.116 [2024-12-09 05:29:30.546722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:39.116 [2024-12-09 05:29:30.703225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:40.050 xnvme_bdev 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:34:40.050 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71040 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71040 ']' 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71040 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71040 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71040' 01:34:40.307 killing process with pid 71040 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71040 01:34:40.307 05:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71040 01:34:42.834 01:34:42.834 real 0m3.666s 01:34:42.834 user 0m3.825s 01:34:42.834 sys 0m0.595s 01:34:42.834 05:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:42.834 ************************************ 01:34:42.834 END TEST xnvme_rpc 01:34:42.834 ************************************ 01:34:42.834 05:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:34:42.834 05:29:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:34:42.834 05:29:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:34:42.834 05:29:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:42.834 05:29:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:34:42.834 ************************************ 01:34:42.834 START TEST xnvme_bdevperf 01:34:42.834 ************************************ 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
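The bdevperf invocation traced above takes its bdev configuration from /dev/fd/62, a descriptor the harness fills with the JSON printed next. An equivalent manual run, assuming that JSON has been saved to a file (bdev.json is an illustrative name):

  build/examples/bdevperf --json bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096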
01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:34:42.834 05:29:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:34:42.834 { 01:34:42.834 "subsystems": [ 01:34:42.834 { 01:34:42.834 "subsystem": "bdev", 01:34:42.834 "config": [ 01:34:42.834 { 01:34:42.834 "params": { 01:34:42.834 "io_mechanism": "libaio", 01:34:42.834 "conserve_cpu": true, 01:34:42.834 "filename": "/dev/nvme0n1", 01:34:42.834 "name": "xnvme_bdev" 01:34:42.834 }, 01:34:42.834 "method": "bdev_xnvme_create" 01:34:42.834 }, 01:34:42.834 { 01:34:42.834 "method": "bdev_wait_for_examine" 01:34:42.834 } 01:34:42.834 ] 01:34:42.834 } 01:34:42.834 ] 01:34:42.834 } 01:34:42.834 [2024-12-09 05:29:34.093087] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:42.834 [2024-12-09 05:29:34.093305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71114 ] 01:34:42.834 [2024-12-09 05:29:34.281949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:42.834 [2024-12-09 05:29:34.407158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:43.400 Running I/O for 5 seconds... 01:34:45.264 29857.00 IOPS, 116.63 MiB/s [2024-12-09T05:29:37.814Z] 28709.00 IOPS, 112.14 MiB/s [2024-12-09T05:29:39.188Z] 26977.33 IOPS, 105.38 MiB/s [2024-12-09T05:29:40.122Z] 26354.00 IOPS, 102.95 MiB/s [2024-12-09T05:29:40.122Z] 26099.60 IOPS, 101.95 MiB/s 01:34:48.505 Latency(us) 01:34:48.505 [2024-12-09T05:29:40.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:48.505 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:34:48.505 xnvme_bdev : 5.01 26067.98 101.83 0.00 0.00 2448.58 385.40 5332.25 01:34:48.505 [2024-12-09T05:29:40.122Z] =================================================================================================================== 01:34:48.505 [2024-12-09T05:29:40.122Z] Total : 26067.98 101.83 0.00 0.00 2448.58 385.40 5332.25 01:34:49.442 05:29:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:49.442 05:29:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:34:49.442 05:29:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:34:49.442 05:29:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:34:49.442 05:29:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:34:49.442 { 01:34:49.442 "subsystems": [ 01:34:49.442 { 01:34:49.442 "subsystem": "bdev", 01:34:49.442 "config": [ 01:34:49.442 { 01:34:49.442 "params": { 01:34:49.442 "io_mechanism": "libaio", 01:34:49.442 "conserve_cpu": true, 01:34:49.442 "filename": "/dev/nvme0n1", 01:34:49.442 "name": "xnvme_bdev" 01:34:49.442 }, 01:34:49.442 "method": "bdev_xnvme_create" 01:34:49.442 }, 01:34:49.442 { 01:34:49.442 "method": "bdev_wait_for_examine" 01:34:49.442 } 01:34:49.442 ] 01:34:49.442 } 01:34:49.442 ] 01:34:49.442 } 01:34:49.442 [2024-12-09 05:29:40.875026] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
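The outer loop in xnvme.sh re-runs these benchmarks for every io_mechanism/conserve_cpu combination by rewriting the params shown above. A hedged jq one-liner in the same select() style the harness's own filters use, flipping conserve_cpu in a saved copy of the config (file name illustrative):

  jq '(.subsystems[].config[] | select(.method == "bdev_xnvme_create").params.conserve_cpu) = false' bdev.json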
01:34:49.442 [2024-12-09 05:29:40.875209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71195 ] 01:34:49.442 [2024-12-09 05:29:41.043034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:49.700 [2024-12-09 05:29:41.153290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:49.958 Running I/O for 5 seconds... 01:34:51.905 22207.00 IOPS, 86.75 MiB/s [2024-12-09T05:29:44.893Z] 23648.00 IOPS, 92.38 MiB/s [2024-12-09T05:29:45.878Z] 24781.67 IOPS, 96.80 MiB/s [2024-12-09T05:29:46.814Z] 24190.50 IOPS, 94.49 MiB/s [2024-12-09T05:29:46.814Z] 23934.80 IOPS, 93.50 MiB/s 01:34:55.197 Latency(us) 01:34:55.197 [2024-12-09T05:29:46.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:55.197 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:34:55.197 xnvme_bdev : 5.00 23912.00 93.41 0.00 0.00 2668.83 359.33 6017.40 01:34:55.197 [2024-12-09T05:29:46.814Z] =================================================================================================================== 01:34:55.197 [2024-12-09T05:29:46.814Z] Total : 23912.00 93.41 0.00 0.00 2668.83 359.33 6017.40 01:34:56.132 01:34:56.132 real 0m13.586s 01:34:56.132 user 0m4.993s 01:34:56.132 sys 0m6.080s 01:34:56.132 05:29:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:56.132 05:29:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:34:56.132 ************************************ 01:34:56.132 END TEST xnvme_bdevperf 01:34:56.132 ************************************ 01:34:56.132 05:29:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:34:56.132 05:29:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:34:56.132 05:29:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:56.132 05:29:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:34:56.132 ************************************ 01:34:56.132 START TEST xnvme_fio_plugin 01:34:56.132 ************************************ 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:34:56.132 05:29:47 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:34:56.132 05:29:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:34:56.132 { 01:34:56.132 "subsystems": [ 01:34:56.132 { 01:34:56.132 "subsystem": "bdev", 01:34:56.132 "config": [ 01:34:56.132 { 01:34:56.132 "params": { 01:34:56.132 "io_mechanism": "libaio", 01:34:56.132 "conserve_cpu": true, 01:34:56.132 "filename": "/dev/nvme0n1", 01:34:56.132 "name": "xnvme_bdev" 01:34:56.132 }, 01:34:56.132 "method": "bdev_xnvme_create" 01:34:56.132 }, 01:34:56.132 { 01:34:56.132 "method": "bdev_wait_for_examine" 01:34:56.132 } 01:34:56.132 ] 01:34:56.132 } 01:34:56.132 ] 01:34:56.132 } 01:34:56.390 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:34:56.390 fio-3.35 01:34:56.390 Starting 1 thread 01:35:02.956 01:35:02.956 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71314: Mon Dec 9 05:29:53 2024 01:35:02.956 read: IOPS=23.4k, BW=91.3MiB/s (95.7MB/s)(457MiB/5001msec) 01:35:02.956 slat (usec): min=4, max=644, avg=38.52, stdev=29.34 01:35:02.956 clat (usec): min=116, max=5594, avg=1496.18, stdev=807.14 01:35:02.956 lat (usec): min=174, max=5673, avg=1534.70, stdev=809.25 01:35:02.956 clat percentiles (usec): 01:35:02.956 | 1.00th=[ 253], 5.00th=[ 367], 10.00th=[ 490], 20.00th=[ 725], 01:35:02.956 | 30.00th=[ 955], 40.00th=[ 1188], 50.00th=[ 1418], 60.00th=[ 1663], 01:35:02.956 | 70.00th=[ 1926], 80.00th=[ 2212], 90.00th=[ 2573], 95.00th=[ 2835], 01:35:02.956 | 99.00th=[ 3720], 99.50th=[ 4113], 99.90th=[ 4752], 99.95th=[ 4948], 01:35:02.956 | 99.99th=[ 5211] 01:35:02.956 bw ( KiB/s): min=86152, max=102792, 
per=100.00%, avg=94512.00, stdev=5681.37, samples=9 01:35:02.956 iops : min=21538, max=25698, avg=23628.00, stdev=1420.34, samples=9 01:35:02.956 lat (usec) : 250=0.95%, 500=9.53%, 750=10.63%, 1000=10.94% 01:35:02.956 lat (msec) : 2=40.76%, 4=26.55%, 10=0.64% 01:35:02.956 cpu : usr=22.84%, sys=54.74%, ctx=147, majf=0, minf=615 01:35:02.956 IO depths : 1=0.1%, 2=1.6%, 4=5.5%, 8=12.5%, 16=26.0%, 32=52.7%, >=64=1.6% 01:35:02.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:02.956 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 01:35:02.956 issued rwts: total=116905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:02.956 latency : target=0, window=0, percentile=100.00%, depth=64 01:35:02.956 01:35:02.956 Run status group 0 (all jobs): 01:35:02.956 READ: bw=91.3MiB/s (95.7MB/s), 91.3MiB/s-91.3MiB/s (95.7MB/s-95.7MB/s), io=457MiB (479MB), run=5001-5001msec 01:35:03.522 ----------------------------------------------------- 01:35:03.522 Suppressions used: 01:35:03.522 count bytes template 01:35:03.522 1 11 /usr/src/fio/parse.c 01:35:03.522 1 8 libtcmalloc_minimal.so 01:35:03.522 1 904 libcrypto.so 01:35:03.522 ----------------------------------------------------- 01:35:03.522 01:35:03.522 05:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:03.522 05:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:35:03.523 05:29:55 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:35:03.523 05:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:03.523 { 01:35:03.523 "subsystems": [ 01:35:03.523 { 01:35:03.523 "subsystem": "bdev", 01:35:03.523 "config": [ 01:35:03.523 { 01:35:03.523 "params": { 01:35:03.523 "io_mechanism": "libaio", 01:35:03.523 "conserve_cpu": true, 01:35:03.523 "filename": "/dev/nvme0n1", 01:35:03.523 "name": "xnvme_bdev" 01:35:03.523 }, 01:35:03.523 "method": "bdev_xnvme_create" 01:35:03.523 }, 01:35:03.523 { 01:35:03.523 "method": "bdev_wait_for_examine" 01:35:03.523 } 01:35:03.523 ] 01:35:03.523 } 01:35:03.523 ] 01:35:03.523 } 01:35:03.780 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:35:03.780 fio-3.35 01:35:03.780 Starting 1 thread 01:35:10.347 01:35:10.347 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71412: Mon Dec 9 05:30:01 2024 01:35:10.347 write: IOPS=21.0k, BW=81.9MiB/s (85.8MB/s)(409MiB/5001msec); 0 zone resets 01:35:10.347 slat (usec): min=5, max=769, avg=43.15, stdev=28.85 01:35:10.347 clat (usec): min=111, max=6863, avg=1646.06, stdev=904.62 01:35:10.348 lat (usec): min=176, max=6969, avg=1689.21, stdev=907.59 01:35:10.348 clat percentiles (usec): 01:35:10.348 | 1.00th=[ 260], 5.00th=[ 379], 10.00th=[ 515], 20.00th=[ 775], 01:35:10.348 | 30.00th=[ 1037], 40.00th=[ 1303], 50.00th=[ 1565], 60.00th=[ 1844], 01:35:10.348 | 70.00th=[ 2114], 80.00th=[ 2442], 90.00th=[ 2835], 95.00th=[ 3163], 01:35:10.348 | 99.00th=[ 4080], 99.50th=[ 4555], 99.90th=[ 5342], 99.95th=[ 5604], 01:35:10.348 | 99.99th=[ 6128] 01:35:10.348 bw ( KiB/s): min=75344, max=93840, per=99.73%, avg=83608.67, stdev=5572.76, samples=9 01:35:10.348 iops : min=18836, max=23460, avg=20902.11, stdev=1393.18, samples=9 01:35:10.348 lat (usec) : 250=0.81%, 500=8.70%, 750=9.53%, 1000=9.50% 01:35:10.348 lat (msec) : 2=37.32%, 4=33.01%, 10=1.14% 01:35:10.348 cpu : usr=24.04%, sys=53.80%, ctx=122, majf=0, minf=600 01:35:10.348 IO depths : 1=0.1%, 2=1.7%, 4=5.8%, 8=12.7%, 16=25.9%, 32=52.2%, >=64=1.6% 01:35:10.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:10.348 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.6%, >=64=0.0% 01:35:10.348 issued rwts: total=0,104818,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:10.348 latency : target=0, window=0, percentile=100.00%, depth=64 01:35:10.348 01:35:10.348 Run status group 0 (all jobs): 01:35:10.348 WRITE: bw=81.9MiB/s (85.8MB/s), 81.9MiB/s-81.9MiB/s (85.8MB/s-85.8MB/s), io=409MiB (429MB), run=5001-5001msec 01:35:10.914 ----------------------------------------------------- 01:35:10.914 Suppressions used: 01:35:10.914 count bytes template 01:35:10.914 1 11 /usr/src/fio/parse.c 01:35:10.914 1 8 libtcmalloc_minimal.so 01:35:10.914 1 904 libcrypto.so 01:35:10.914 ----------------------------------------------------- 01:35:10.914 01:35:11.196 01:35:11.196 real 0m14.920s 01:35:11.196 user 0m6.154s 01:35:11.196 sys 0m6.196s 01:35:11.196 05:30:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:11.196 05:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:35:11.196 ************************************ 01:35:11.196 END TEST xnvme_fio_plugin 01:35:11.196 ************************************ 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 01:35:11.196 05:30:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:35:11.196 05:30:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:35:11.196 05:30:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:11.196 05:30:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:35:11.196 ************************************ 01:35:11.196 START TEST xnvme_rpc 01:35:11.196 ************************************ 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71498 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71498 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71498 ']' 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:11.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:11.196 05:30:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:11.196 [2024-12-09 05:30:02.732690] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
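Note how the conserve_cpu=false case reaches the RPC as the empty trailing '' argument in the create call that follows: the cc associative array traced above maps the boolean onto the optional -c flag. A minimal sketch of that mapping:

  declare -A cc=( ["false"]="" ["true"]="-c" )
  # with conserve_cpu=false the expansion is empty, so no -c flag is passed:
  scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ${cc[false]}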
01:35:11.196 [2024-12-09 05:30:02.732894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71498 ] 01:35:11.488 [2024-12-09 05:30:02.922810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:11.488 [2024-12-09 05:30:03.048030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:12.422 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:12.423 xnvme_bdev 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:12.423 05:30:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:12.423 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:12.423 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 01:35:12.423 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:35:12.423 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:12.423 05:30:04 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:35:12.423 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71498 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71498 ']' 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71498 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71498 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:35:12.681 killing process with pid 71498 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71498' 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71498 01:35:12.681 05:30:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71498 01:35:14.585 01:35:14.585 real 0m3.592s 01:35:14.585 user 0m3.694s 01:35:14.585 sys 0m0.583s 01:35:14.585 05:30:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:14.585 05:30:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:14.585 ************************************ 01:35:14.585 END TEST xnvme_rpc 01:35:14.585 ************************************ 01:35:14.843 05:30:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:35:14.843 05:30:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:35:14.843 05:30:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:14.843 05:30:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:35:14.843 ************************************ 01:35:14.843 START TEST xnvme_bdevperf 01:35:14.843 ************************************ 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:35:14.843 05:30:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:35:14.843 { 01:35:14.843 "subsystems": [ 01:35:14.843 { 01:35:14.843 "subsystem": "bdev", 01:35:14.843 "config": [ 01:35:14.843 { 01:35:14.843 "params": { 01:35:14.843 "io_mechanism": "io_uring", 01:35:14.843 "conserve_cpu": false, 01:35:14.843 "filename": "/dev/nvme0n1", 01:35:14.843 "name": "xnvme_bdev" 01:35:14.843 }, 01:35:14.843 "method": "bdev_xnvme_create" 01:35:14.843 }, 01:35:14.843 { 01:35:14.843 "method": "bdev_wait_for_examine" 01:35:14.843 } 01:35:14.843 ] 01:35:14.843 } 01:35:14.843 ] 01:35:14.843 } 01:35:14.843 [2024-12-09 05:30:06.351842] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:14.843 [2024-12-09 05:30:06.352074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71578 ] 01:35:15.102 [2024-12-09 05:30:06.547030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:15.102 [2024-12-09 05:30:06.697127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:15.668 Running I/O for 5 seconds... 01:35:17.538 44549.00 IOPS, 174.02 MiB/s [2024-12-09T05:30:10.088Z] 47037.00 IOPS, 183.74 MiB/s [2024-12-09T05:30:11.463Z] 45542.67 IOPS, 177.90 MiB/s [2024-12-09T05:30:12.399Z] 44261.00 IOPS, 172.89 MiB/s [2024-12-09T05:30:12.399Z] 44610.60 IOPS, 174.26 MiB/s 01:35:20.782 Latency(us) 01:35:20.782 [2024-12-09T05:30:12.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:35:20.782 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:35:20.782 xnvme_bdev : 5.00 44584.18 174.16 0.00 0.00 1430.97 71.21 17277.67 01:35:20.782 [2024-12-09T05:30:12.399Z] =================================================================================================================== 01:35:20.782 [2024-12-09T05:30:12.399Z] Total : 44584.18 174.16 0.00 0.00 1430.97 71.21 17277.67 01:35:21.716 05:30:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:21.716 05:30:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:35:21.716 05:30:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:35:21.716 05:30:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:35:21.716 05:30:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:35:21.716 { 01:35:21.716 "subsystems": [ 01:35:21.716 { 01:35:21.716 "subsystem": "bdev", 01:35:21.716 "config": [ 01:35:21.716 { 01:35:21.716 "params": { 01:35:21.716 "io_mechanism": "io_uring", 01:35:21.717 "conserve_cpu": false, 01:35:21.717 "filename": "/dev/nvme0n1", 01:35:21.717 "name": "xnvme_bdev" 01:35:21.717 }, 01:35:21.717 "method": "bdev_xnvme_create" 01:35:21.717 }, 01:35:21.717 { 01:35:21.717 "method": "bdev_wait_for_examine" 01:35:21.717 } 01:35:21.717 ] 01:35:21.717 } 01:35:21.717 ] 01:35:21.717 } 01:35:21.717 [2024-12-09 05:30:13.308708] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
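The --json /dev/fd/62 argument in these bdevperf and fio invocations is bash process substitution: gen_conf (the harness function traced here) prints the JSON just shown, and the tool reads it from the anonymous descriptor. A sketch of the same pattern outside the harness:

  build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096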
01:35:21.717 [2024-12-09 05:30:13.308878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71659 ] 01:35:21.974 [2024-12-09 05:30:13.479104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:22.233 [2024-12-09 05:30:13.597107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:22.491 Running I/O for 5 seconds... 01:35:24.364 42166.00 IOPS, 164.71 MiB/s [2024-12-09T05:30:17.355Z] 43126.50 IOPS, 168.46 MiB/s [2024-12-09T05:30:18.292Z] 43180.00 IOPS, 168.67 MiB/s [2024-12-09T05:30:19.245Z] 43273.75 IOPS, 169.04 MiB/s 01:35:27.628 Latency(us) 01:35:27.628 [2024-12-09T05:30:19.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:35:27.628 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:35:27.628 xnvme_bdev : 5.00 43647.37 170.50 0.00 0.00 1460.97 424.49 6285.50 01:35:27.628 [2024-12-09T05:30:19.245Z] =================================================================================================================== 01:35:27.628 [2024-12-09T05:30:19.245Z] Total : 43647.37 170.50 0.00 0.00 1460.97 424.49 6285.50 01:35:28.563 01:35:28.563 real 0m13.900s 01:35:28.563 user 0m7.010s 01:35:28.563 sys 0m6.671s 01:35:28.563 05:30:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:28.563 05:30:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:35:28.563 ************************************ 01:35:28.563 END TEST xnvme_bdevperf 01:35:28.563 ************************************ 01:35:28.822 05:30:20 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:35:28.822 05:30:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:35:28.822 05:30:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:28.822 05:30:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:35:28.822 ************************************ 01:35:28.822 START TEST xnvme_fio_plugin 01:35:28.822 ************************************ 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:35:28.822 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:35:28.823 05:30:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:28.823 { 01:35:28.823 "subsystems": [ 01:35:28.823 { 01:35:28.823 "subsystem": "bdev", 01:35:28.823 "config": [ 01:35:28.823 { 01:35:28.823 "params": { 01:35:28.823 "io_mechanism": "io_uring", 01:35:28.823 "conserve_cpu": false, 01:35:28.823 "filename": "/dev/nvme0n1", 01:35:28.823 "name": "xnvme_bdev" 01:35:28.823 }, 01:35:28.823 "method": "bdev_xnvme_create" 01:35:28.823 }, 01:35:28.823 { 01:35:28.823 "method": "bdev_wait_for_examine" 01:35:28.823 } 01:35:28.823 ] 01:35:28.823 } 01:35:28.823 ] 01:35:28.823 } 01:35:29.081 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:35:29.081 fio-3.35 01:35:29.081 Starting 1 thread 01:35:35.644 01:35:35.644 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71778: Mon Dec 9 05:30:26 2024 01:35:35.644 read: IOPS=48.7k, BW=190MiB/s (199MB/s)(951MiB/5001msec) 01:35:35.644 slat (usec): min=2, max=529, avg= 3.78, stdev= 2.73 01:35:35.644 clat (usec): min=420, max=5177, avg=1162.24, stdev=132.89 01:35:35.644 lat (usec): min=438, max=5181, avg=1166.02, stdev=133.29 01:35:35.644 clat percentiles (usec): 01:35:35.644 | 1.00th=[ 930], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1057], 01:35:35.644 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 01:35:35.644 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[ 1385], 01:35:35.644 | 99.00th=[ 1582], 99.50th=[ 1663], 99.90th=[ 1909], 99.95th=[ 2073], 01:35:35.644 | 99.99th=[ 2966] 01:35:35.644 bw ( KiB/s): min=177928, max=202992, per=100.00%, avg=196301.33, stdev=7598.62, 
samples=9 01:35:35.644 iops : min=44486, max=50748, avg=49075.56, stdev=1898.29, samples=9 01:35:35.644 lat (usec) : 500=0.01%, 750=0.04%, 1000=7.41% 01:35:35.644 lat (msec) : 2=92.49%, 4=0.06%, 10=0.01% 01:35:35.644 cpu : usr=35.80%, sys=63.16%, ctx=13, majf=0, minf=762 01:35:35.644 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 01:35:35.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:35.644 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 01:35:35.644 issued rwts: total=243368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:35.644 latency : target=0, window=0, percentile=100.00%, depth=64 01:35:35.644 01:35:35.644 Run status group 0 (all jobs): 01:35:35.644 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=951MiB (997MB), run=5001-5001msec 01:35:36.212 ----------------------------------------------------- 01:35:36.212 Suppressions used: 01:35:36.212 count bytes template 01:35:36.212 1 11 /usr/src/fio/parse.c 01:35:36.212 1 8 libtcmalloc_minimal.so 01:35:36.212 1 904 libcrypto.so 01:35:36.212 ----------------------------------------------------- 01:35:36.212 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:35:36.212 05:30:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:35:36.212 { 01:35:36.212 "subsystems": [ 01:35:36.212 { 01:35:36.212 "subsystem": "bdev", 01:35:36.212 "config": [ 01:35:36.212 { 01:35:36.212 "params": { 01:35:36.212 "io_mechanism": "io_uring", 01:35:36.212 "conserve_cpu": false, 01:35:36.212 "filename": "/dev/nvme0n1", 01:35:36.212 "name": "xnvme_bdev" 01:35:36.212 }, 01:35:36.212 "method": "bdev_xnvme_create" 01:35:36.212 }, 01:35:36.212 { 01:35:36.212 "method": "bdev_wait_for_examine" 01:35:36.212 } 01:35:36.212 ] 01:35:36.212 } 01:35:36.212 ] 01:35:36.212 } 01:35:36.472 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:35:36.472 fio-3.35 01:35:36.472 Starting 1 thread 01:35:43.035 01:35:43.035 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71876: Mon Dec 9 05:30:33 2024 01:35:43.035 write: IOPS=43.1k, BW=169MiB/s (177MB/s)(843MiB/5001msec); 0 zone resets 01:35:43.035 slat (nsec): min=2453, max=72560, avg=5023.58, stdev=2768.65 01:35:43.035 clat (usec): min=765, max=4463, avg=1282.27, stdev=183.98 01:35:43.035 lat (usec): min=770, max=4467, avg=1287.29, stdev=184.96 01:35:43.035 clat percentiles (usec): 01:35:43.035 | 1.00th=[ 988], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1123], 01:35:43.035 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1303], 01:35:43.035 | 70.00th=[ 1336], 80.00th=[ 1401], 90.00th=[ 1532], 95.00th=[ 1647], 01:35:43.035 | 99.00th=[ 1844], 99.50th=[ 1893], 99.90th=[ 2040], 99.95th=[ 2147], 01:35:43.035 | 99.99th=[ 2671] 01:35:43.035 bw ( KiB/s): min=165376, max=186880, per=100.00%, avg=173224.00, stdev=7687.47, samples=9 01:35:43.035 iops : min=41344, max=46720, avg=43306.00, stdev=1921.87, samples=9 01:35:43.035 lat (usec) : 1000=1.68% 01:35:43.035 lat (msec) : 2=98.19%, 4=0.13%, 10=0.01% 01:35:43.035 cpu : usr=39.74%, sys=59.12%, ctx=13, majf=0, minf=763 01:35:43.035 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:35:43.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:43.035 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 01:35:43.035 issued rwts: total=0,215773,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:43.035 latency : target=0, window=0, percentile=100.00%, depth=64 01:35:43.035 01:35:43.035 Run status group 0 (all jobs): 01:35:43.035 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=843MiB (884MB), run=5001-5001msec 01:35:43.605 ----------------------------------------------------- 01:35:43.605 Suppressions used: 01:35:43.605 count bytes template 01:35:43.605 1 11 /usr/src/fio/parse.c 01:35:43.605 1 8 libtcmalloc_minimal.so 01:35:43.605 1 904 libcrypto.so 01:35:43.605 ----------------------------------------------------- 01:35:43.605 01:35:43.605 01:35:43.605 real 0m14.824s 01:35:43.605 user 0m7.543s 01:35:43.605 sys 0m6.906s 01:35:43.605 05:30:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:43.605 
05:30:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:35:43.605 ************************************ 01:35:43.605 END TEST xnvme_fio_plugin 01:35:43.605 ************************************ 01:35:43.605 05:30:35 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:35:43.605 05:30:35 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 01:35:43.605 05:30:35 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 01:35:43.605 05:30:35 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:35:43.605 05:30:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:35:43.605 05:30:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:43.605 05:30:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:35:43.605 ************************************ 01:35:43.605 START TEST xnvme_rpc 01:35:43.605 ************************************ 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:35:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71963 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71963 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71963 ']' 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:43.605 05:30:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:43.605 [2024-12-09 05:30:35.206555] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
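waitforlisten, traced above with max_retries=100, simply polls until the target's RPC socket answers before the test body runs. A minimal sketch of such a polling loop, assuming scripts/rpc.py and the default /var/tmp/spdk.sock (the real helper in autotest_common.sh also tracks the target pid; this body is illustrative):

  i=0
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
    (( ++i > 100 )) && { echo 'spdk_tgt did not come up' >&2; exit 1; }
    sleep 0.5
  done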
01:35:43.605 [2024-12-09 05:30:35.207037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71963 ] 01:35:43.878 [2024-12-09 05:30:35.389466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:43.878 [2024-12-09 05:30:35.494913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:44.813 xnvme_bdev 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:44.813 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71963 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71963 ']' 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71963 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71963 01:35:45.072 killing process with pid 71963 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71963' 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71963 01:35:45.072 05:30:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71963 01:35:47.603 ************************************ 01:35:47.603 END TEST xnvme_rpc 01:35:47.603 ************************************ 01:35:47.603 01:35:47.603 real 0m3.866s 01:35:47.603 user 0m3.975s 01:35:47.603 sys 0m0.543s 01:35:47.603 05:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:47.603 05:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:35:47.603 05:30:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:35:47.603 05:30:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:35:47.603 05:30:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:47.603 05:30:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:35:47.603 ************************************ 01:35:47.603 START TEST xnvme_bdevperf 01:35:47.603 ************************************ 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:35:47.603 05:30:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:35:47.603 { 01:35:47.603 "subsystems": [ 01:35:47.603 { 01:35:47.603 "subsystem": "bdev", 01:35:47.603 "config": [ 01:35:47.603 { 01:35:47.603 "params": { 01:35:47.603 "io_mechanism": "io_uring", 01:35:47.603 "conserve_cpu": true, 01:35:47.603 "filename": "/dev/nvme0n1", 01:35:47.603 "name": "xnvme_bdev" 01:35:47.603 }, 01:35:47.603 "method": "bdev_xnvme_create" 01:35:47.603 }, 01:35:47.603 { 01:35:47.603 "method": "bdev_wait_for_examine" 01:35:47.603 } 01:35:47.603 ] 01:35:47.603 } 01:35:47.603 ] 01:35:47.603 } 01:35:47.603 [2024-12-09 05:30:39.106037] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:47.603 [2024-12-09 05:30:39.106211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 01:35:47.862 [2024-12-09 05:30:39.287025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:47.862 [2024-12-09 05:30:39.417742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:48.428 Running I/O for 5 seconds... 01:35:50.301 47808.00 IOPS, 186.75 MiB/s [2024-12-09T05:30:42.851Z] 48160.00 IOPS, 188.12 MiB/s [2024-12-09T05:30:43.806Z] 47377.67 IOPS, 185.07 MiB/s [2024-12-09T05:30:45.178Z] 47133.00 IOPS, 184.11 MiB/s 01:35:53.561 Latency(us) 01:35:53.561 [2024-12-09T05:30:45.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:35:53.561 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:35:53.561 xnvme_bdev : 5.00 47200.70 184.38 0.00 0.00 1351.62 860.16 4289.63 01:35:53.561 [2024-12-09T05:30:45.178Z] =================================================================================================================== 01:35:53.561 [2024-12-09T05:30:45.178Z] Total : 47200.70 184.38 0.00 0.00 1351.62 860.16 4289.63 01:35:54.124 05:30:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:35:54.124 05:30:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:35:54.124 05:30:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:35:54.124 05:30:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:35:54.124 05:30:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:35:54.381 { 01:35:54.381 "subsystems": [ 01:35:54.381 { 01:35:54.381 "subsystem": "bdev", 01:35:54.381 "config": [ 01:35:54.381 { 01:35:54.381 "params": { 01:35:54.381 "io_mechanism": "io_uring", 01:35:54.381 "conserve_cpu": true, 01:35:54.381 "filename": "/dev/nvme0n1", 01:35:54.381 "name": "xnvme_bdev" 01:35:54.381 }, 01:35:54.381 "method": "bdev_xnvme_create" 01:35:54.381 }, 01:35:54.381 { 01:35:54.381 "method": "bdev_wait_for_examine" 01:35:54.381 } 01:35:54.381 ] 01:35:54.381 } 01:35:54.381 ] 01:35:54.381 } 01:35:54.381 [2024-12-09 05:30:45.838470] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:35:54.381 [2024-12-09 05:30:45.838643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72118 ] 01:35:54.639 [2024-12-09 05:30:46.021224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:35:54.639 [2024-12-09 05:30:46.124317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:54.898 Running I/O for 5 seconds... 01:35:57.222 40154.00 IOPS, 156.85 MiB/s [2024-12-09T05:30:49.772Z] 41133.00 IOPS, 160.68 MiB/s [2024-12-09T05:30:50.759Z] 40090.00 IOPS, 156.60 MiB/s [2024-12-09T05:30:51.694Z] 36048.75 IOPS, 140.82 MiB/s [2024-12-09T05:30:51.694Z] 34891.60 IOPS, 136.30 MiB/s 01:36:00.077 Latency(us) 01:36:00.077 [2024-12-09T05:30:51.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:36:00.077 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:36:00.077 xnvme_bdev : 5.00 34871.52 136.22 0.00 0.00 1828.93 62.84 12630.57 01:36:00.077 [2024-12-09T05:30:51.694Z] =================================================================================================================== 01:36:00.077 [2024-12-09T05:30:51.694Z] Total : 34871.52 136.22 0.00 0.00 1828.93 62.84 12630.57 01:36:01.013 01:36:01.013 real 0m13.394s 01:36:01.013 user 0m7.816s 01:36:01.013 sys 0m4.577s 01:36:01.013 05:30:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:01.013 05:30:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:36:01.013 ************************************ 01:36:01.013 END TEST xnvme_bdevperf 01:36:01.013 ************************************ 01:36:01.013 05:30:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:36:01.013 05:30:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:01.013 05:30:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:01.013 05:30:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:36:01.013 ************************************ 01:36:01.013 START TEST xnvme_fio_plugin 01:36:01.013 ************************************ 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:01.013 
05:30:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:36:01.013 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:36:01.014 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:01.014 05:30:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:01.014 { 01:36:01.014 "subsystems": [ 01:36:01.014 { 01:36:01.014 "subsystem": "bdev", 01:36:01.014 "config": [ 01:36:01.014 { 01:36:01.014 "params": { 01:36:01.014 "io_mechanism": "io_uring", 01:36:01.014 "conserve_cpu": true, 01:36:01.014 "filename": "/dev/nvme0n1", 01:36:01.014 "name": "xnvme_bdev" 01:36:01.014 }, 01:36:01.014 "method": "bdev_xnvme_create" 01:36:01.014 }, 01:36:01.014 { 01:36:01.014 "method": "bdev_wait_for_examine" 01:36:01.014 } 01:36:01.014 ] 01:36:01.014 } 01:36:01.014 ] 01:36:01.014 } 01:36:01.272 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:36:01.272 fio-3.35 01:36:01.272 Starting 1 thread 01:36:07.835 01:36:07.835 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72242: Mon Dec 9 05:30:58 2024 01:36:07.835 read: IOPS=49.0k, BW=191MiB/s (201MB/s)(958MiB/5003msec) 01:36:07.835 slat (nsec): min=2394, max=79527, avg=3337.79, stdev=2143.79 01:36:07.835 clat (usec): min=673, max=9500, avg=1170.74, stdev=132.31 01:36:07.835 lat (usec): min=677, max=9505, avg=1174.08, stdev=132.65 01:36:07.835 clat percentiles (usec): 01:36:07.835 | 1.00th=[ 955], 5.00th=[ 1012], 10.00th=[ 1037], 20.00th=[ 1074], 01:36:07.835 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 01:36:07.835 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1352], 01:36:07.835 | 99.00th=[ 1516], 99.50th=[ 1663], 99.90th=[ 2212], 99.95th=[ 2671], 01:36:07.835 | 99.99th=[ 4686] 01:36:07.835 bw ( KiB/s): 
min=180224, max=204288, per=100.00%, avg=196266.67, stdev=7485.54, samples=9 01:36:07.835 iops : min=45056, max=51072, avg=49066.67, stdev=1871.38, samples=9 01:36:07.835 lat (usec) : 750=0.01%, 1000=4.00% 01:36:07.835 lat (msec) : 2=95.86%, 4=0.11%, 10=0.02% 01:36:07.835 cpu : usr=38.88%, sys=55.92%, ctx=11, majf=0, minf=762 01:36:07.835 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 01:36:07.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:07.836 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 01:36:07.836 issued rwts: total=245198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:07.836 latency : target=0, window=0, percentile=100.00%, depth=64 01:36:07.836 01:36:07.836 Run status group 0 (all jobs): 01:36:07.836 READ: bw=191MiB/s (201MB/s), 191MiB/s-191MiB/s (201MB/s-201MB/s), io=958MiB (1004MB), run=5003-5003msec 01:36:08.409 ----------------------------------------------------- 01:36:08.409 Suppressions used: 01:36:08.409 count bytes template 01:36:08.409 1 11 /usr/src/fio/parse.c 01:36:08.409 1 8 libtcmalloc_minimal.so 01:36:08.409 1 904 libcrypto.so 01:36:08.409 ----------------------------------------------------- 01:36:08.409 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:36:08.409 05:30:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:08.409 05:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:08.409 { 01:36:08.409 "subsystems": [ 01:36:08.409 { 01:36:08.409 "subsystem": "bdev", 01:36:08.409 "config": [ 01:36:08.409 { 01:36:08.409 "params": { 01:36:08.409 "io_mechanism": "io_uring", 01:36:08.409 "conserve_cpu": true, 01:36:08.409 "filename": "/dev/nvme0n1", 01:36:08.409 "name": "xnvme_bdev" 01:36:08.409 }, 01:36:08.409 "method": "bdev_xnvme_create" 01:36:08.409 }, 01:36:08.409 { 01:36:08.409 "method": "bdev_wait_for_examine" 01:36:08.409 } 01:36:08.409 ] 01:36:08.409 } 01:36:08.409 ] 01:36:08.409 } 01:36:08.409 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:36:08.409 fio-3.35 01:36:08.409 Starting 1 thread 01:36:14.973 01:36:14.973 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72336: Mon Dec 9 05:31:05 2024 01:36:14.973 write: IOPS=43.3k, BW=169MiB/s (177MB/s)(845MiB/5002msec); 0 zone resets 01:36:14.973 slat (usec): min=2, max=274, avg= 4.79, stdev= 3.06 01:36:14.973 clat (usec): min=571, max=3004, avg=1288.57, stdev=171.87 01:36:14.973 lat (usec): min=575, max=3038, avg=1293.36, stdev=172.57 01:36:14.973 clat percentiles (usec): 01:36:14.973 | 1.00th=[ 979], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1156], 01:36:14.974 | 30.00th=[ 1188], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1319], 01:36:14.974 | 70.00th=[ 1352], 80.00th=[ 1401], 90.00th=[ 1483], 95.00th=[ 1582], 01:36:14.974 | 99.00th=[ 1860], 99.50th=[ 1958], 99.90th=[ 2311], 99.95th=[ 2474], 01:36:14.974 | 99.99th=[ 2802] 01:36:14.974 bw ( KiB/s): min=166400, max=182528, per=99.78%, avg=172686.11, stdev=5279.35, samples=9 01:36:14.974 iops : min=41600, max=45632, avg=43171.44, stdev=1319.87, samples=9 01:36:14.974 lat (usec) : 750=0.05%, 1000=1.56% 01:36:14.974 lat (msec) : 2=98.04%, 4=0.35% 01:36:14.974 cpu : usr=48.79%, sys=46.09%, ctx=19, majf=0, minf=763 01:36:14.974 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.8%, 32=50.4%, >=64=1.6% 01:36:14.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:14.974 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 01:36:14.974 issued rwts: total=0,216416,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:14.974 latency : target=0, window=0, percentile=100.00%, depth=64 01:36:14.974 01:36:14.974 Run status group 0 (all jobs): 01:36:14.974 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=845MiB (886MB), run=5002-5002msec 01:36:15.909 ----------------------------------------------------- 01:36:15.909 Suppressions used: 01:36:15.909 count bytes template 01:36:15.909 1 11 /usr/src/fio/parse.c 01:36:15.909 1 8 libtcmalloc_minimal.so 01:36:15.909 1 904 libcrypto.so 01:36:15.909 ----------------------------------------------------- 01:36:15.909 01:36:15.909 ************************************ 01:36:15.909 END TEST xnvme_fio_plugin 01:36:15.909 ************************************ 
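Both fio jobs above follow the same invocation pattern: fio loads SPDK's external spdk_bdev ioengine from the fio plugin shared object, libasan is force-loaded through LD_PRELOAD so the ASan-instrumented plugin can resolve its symbols, and the bdev layer is configured by streaming the JSON shown above into --spdk_json_conf via /dev/fd/62. A minimal standalone sketch of the randread job, assuming the SPDK tree and fio checkout sit at the same paths used in this run and that the bdev JSON has been saved to a hypothetical bdev.json:

  # Reproduce the harness's fio invocation by hand; the paths below are the
  # ones from this CI run and are placeholders for other setups.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev \
      62<bdev.json

Note that --filename here names the bdev created by bdev_xnvme_create, not a device node; the actual /dev/nvme0n1 path travels inside the JSON config.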
01:36:15.909 01:36:15.909 real 0m14.811s 01:36:15.909 user 0m8.112s 01:36:15.909 sys 0m5.905s 01:36:15.909 05:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:15.909 05:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 01:36:15.909 05:31:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:36:15.909 05:31:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:15.909 05:31:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:15.909 05:31:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:36:15.909 ************************************ 01:36:15.909 START TEST xnvme_rpc 01:36:15.909 ************************************ 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72421 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72421 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72421 ']' 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:36:15.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:36:15.909 05:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:15.909 [2024-12-09 05:31:07.448224] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:36:15.909 [2024-12-09 05:31:07.449203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72421 ] 01:36:16.168 [2024-12-09 05:31:07.651274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:16.428 [2024-12-09 05:31:07.815948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:17.363 xnvme_bdev 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72421 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72421 ']' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72421 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:36:17.363 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72421 01:36:17.364 killing process with pid 72421 01:36:17.364 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:36:17.364 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:36:17.364 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72421' 01:36:17.364 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72421 01:36:17.364 05:31:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72421 01:36:19.899 ************************************ 01:36:19.899 END TEST xnvme_rpc 01:36:19.899 ************************************ 01:36:19.899 01:36:19.899 real 0m3.879s 01:36:19.899 user 0m4.006s 01:36:19.899 sys 0m0.614s 01:36:19.899 05:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:19.899 05:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:36:19.899 05:31:11 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:36:19.899 05:31:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:19.899 05:31:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:19.899 05:31:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:36:19.899 ************************************ 01:36:19.899 START TEST xnvme_bdevperf 01:36:19.899 ************************************ 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:36:19.899 05:31:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:36:19.899 { 01:36:19.899 "subsystems": [ 01:36:19.899 { 01:36:19.899 "subsystem": "bdev", 01:36:19.899 "config": [ 01:36:19.899 { 01:36:19.899 "params": { 01:36:19.899 "io_mechanism": "io_uring_cmd", 01:36:19.899 "conserve_cpu": false, 01:36:19.899 "filename": "/dev/ng0n1", 01:36:19.899 "name": "xnvme_bdev" 01:36:19.899 }, 01:36:19.899 "method": "bdev_xnvme_create" 01:36:19.899 }, 01:36:19.899 { 01:36:19.899 "method": "bdev_wait_for_examine" 01:36:19.899 } 01:36:19.899 ] 01:36:19.899 } 01:36:19.899 ] 01:36:19.899 } 01:36:19.899 [2024-12-09 05:31:11.329803] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:36:19.899 [2024-12-09 05:31:11.329950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72502 ] 01:36:19.899 [2024-12-09 05:31:11.498769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:20.179 [2024-12-09 05:31:11.622900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:20.440 Running I/O for 5 seconds... 01:36:22.748 51065.00 IOPS, 199.47 MiB/s [2024-12-09T05:31:15.308Z] 50240.50 IOPS, 196.25 MiB/s [2024-12-09T05:31:16.243Z] 49941.67 IOPS, 195.08 MiB/s [2024-12-09T05:31:17.175Z] 49616.25 IOPS, 193.81 MiB/s 01:36:25.558 Latency(us) 01:36:25.558 [2024-12-09T05:31:17.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:36:25.558 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:36:25.558 xnvme_bdev : 5.00 50085.97 195.65 0.00 0.00 1274.04 517.59 3813.00 01:36:25.558 [2024-12-09T05:31:17.175Z] =================================================================================================================== 01:36:25.558 [2024-12-09T05:31:17.175Z] Total : 50085.97 195.65 0.00 0.00 1274.04 517.59 3813.00 01:36:26.494 05:31:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:26.494 05:31:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:36:26.494 05:31:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:36:26.494 05:31:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:36:26.494 05:31:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:36:26.494 { 01:36:26.494 "subsystems": [ 01:36:26.494 { 01:36:26.494 "subsystem": "bdev", 01:36:26.494 "config": [ 01:36:26.494 { 01:36:26.494 "params": { 01:36:26.494 "io_mechanism": "io_uring_cmd", 01:36:26.494 "conserve_cpu": false, 01:36:26.494 "filename": "/dev/ng0n1", 01:36:26.494 "name": "xnvme_bdev" 01:36:26.494 }, 01:36:26.494 "method": "bdev_xnvme_create" 01:36:26.494 }, 01:36:26.494 { 01:36:26.494 "method": "bdev_wait_for_examine" 01:36:26.494 } 01:36:26.494 ] 01:36:26.494 } 01:36:26.494 ] 01:36:26.494 } 01:36:26.494 [2024-12-09 05:31:18.075873] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:36:26.494 [2024-12-09 05:31:18.076053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72583 ] 01:36:26.753 [2024-12-09 05:31:18.257242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:27.011 [2024-12-09 05:31:18.381110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:27.273 Running I/O for 5 seconds... 01:36:29.150 45184.00 IOPS, 176.50 MiB/s [2024-12-09T05:31:22.143Z] 46105.00 IOPS, 180.10 MiB/s [2024-12-09T05:31:23.079Z] 46375.67 IOPS, 181.15 MiB/s [2024-12-09T05:31:24.016Z] 46710.75 IOPS, 182.46 MiB/s 01:36:32.399 Latency(us) 01:36:32.399 [2024-12-09T05:31:24.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:36:32.399 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:36:32.399 xnvme_bdev : 5.00 46784.90 182.75 0.00 0.00 1363.24 621.85 4379.00 01:36:32.399 [2024-12-09T05:31:24.016Z] =================================================================================================================== 01:36:32.399 [2024-12-09T05:31:24.016Z] Total : 46784.90 182.75 0.00 0.00 1363.24 621.85 4379.00 01:36:33.334 05:31:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:33.334 05:31:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 01:36:33.334 05:31:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:36:33.334 05:31:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:36:33.334 05:31:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:36:33.334 { 01:36:33.334 "subsystems": [ 01:36:33.334 { 01:36:33.334 "subsystem": "bdev", 01:36:33.334 "config": [ 01:36:33.334 { 01:36:33.334 "params": { 01:36:33.334 "io_mechanism": "io_uring_cmd", 01:36:33.334 "conserve_cpu": false, 01:36:33.334 "filename": "/dev/ng0n1", 01:36:33.334 "name": "xnvme_bdev" 01:36:33.334 }, 01:36:33.334 "method": "bdev_xnvme_create" 01:36:33.334 }, 01:36:33.334 { 01:36:33.334 "method": "bdev_wait_for_examine" 01:36:33.334 } 01:36:33.334 ] 01:36:33.334 } 01:36:33.334 ] 01:36:33.334 } 01:36:33.334 [2024-12-09 05:31:24.817352] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:36:33.334 [2024-12-09 05:31:24.817579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 01:36:33.592 [2024-12-09 05:31:24.996295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:33.592 [2024-12-09 05:31:25.108890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:33.850 Running I/O for 5 seconds... 
01:36:36.172 80192.00 IOPS, 313.25 MiB/s [2024-12-09T05:31:28.721Z] 79072.00 IOPS, 308.88 MiB/s [2024-12-09T05:31:29.654Z] 79573.33 IOPS, 310.83 MiB/s [2024-12-09T05:31:30.588Z] 79952.00 IOPS, 312.31 MiB/s [2024-12-09T05:31:30.588Z] 80051.20 IOPS, 312.70 MiB/s 01:36:38.971 Latency(us) 01:36:38.971 [2024-12-09T05:31:30.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:36:38.971 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 01:36:38.971 xnvme_bdev : 5.00 80032.52 312.63 0.00 0.00 796.32 472.90 2606.55 01:36:38.971 [2024-12-09T05:31:30.588Z] =================================================================================================================== 01:36:38.971 [2024-12-09T05:31:30.588Z] Total : 80032.52 312.63 0.00 0.00 796.32 472.90 2606.55 01:36:39.907 05:31:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:39.907 05:31:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 01:36:39.907 05:31:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:36:39.907 05:31:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:36:39.907 05:31:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:36:39.907 { 01:36:39.907 "subsystems": [ 01:36:39.907 { 01:36:39.907 "subsystem": "bdev", 01:36:39.907 "config": [ 01:36:39.907 { 01:36:39.907 "params": { 01:36:39.907 "io_mechanism": "io_uring_cmd", 01:36:39.907 "conserve_cpu": false, 01:36:39.907 "filename": "/dev/ng0n1", 01:36:39.907 "name": "xnvme_bdev" 01:36:39.907 }, 01:36:39.907 "method": "bdev_xnvme_create" 01:36:39.907 }, 01:36:39.907 { 01:36:39.907 "method": "bdev_wait_for_examine" 01:36:39.907 } 01:36:39.907 ] 01:36:39.907 } 01:36:39.907 ] 01:36:39.907 } 01:36:40.166 [2024-12-09 05:31:31.545791] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:36:40.166 [2024-12-09 05:31:31.546288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72739 ] 01:36:40.166 [2024-12-09 05:31:31.729132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:40.426 [2024-12-09 05:31:31.834938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:40.684 Running I/O for 5 seconds... 
01:36:42.628 8423.00 IOPS, 32.90 MiB/s [2024-12-09T05:31:35.180Z] 9293.00 IOPS, 36.30 MiB/s [2024-12-09T05:31:36.549Z] 21418.00 IOPS, 83.66 MiB/s [2024-12-09T05:31:37.480Z] 27928.00 IOPS, 109.09 MiB/s [2024-12-09T05:31:37.480Z] 32096.60 IOPS, 125.38 MiB/s 01:36:45.863 Latency(us) 01:36:45.863 [2024-12-09T05:31:37.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:36:45.863 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 01:36:45.863 xnvme_bdev : 5.00 32082.29 125.32 0.00 0.00 1990.70 87.97 42657.98 01:36:45.863 [2024-12-09T05:31:37.480Z] =================================================================================================================== 01:36:45.863 [2024-12-09T05:31:37.480Z] Total : 32082.29 125.32 0.00 0.00 1990.70 87.97 42657.98 01:36:46.796 01:36:46.796 real 0m27.110s 01:36:46.796 user 0m14.505s 01:36:46.796 sys 0m12.208s 01:36:46.796 ************************************ 01:36:46.796 END TEST xnvme_bdevperf 01:36:46.796 ************************************ 01:36:46.796 05:31:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:46.796 05:31:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:36:46.796 05:31:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:36:46.796 05:31:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:46.796 05:31:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:46.796 05:31:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:36:46.796 ************************************ 01:36:46.796 START TEST xnvme_fio_plugin 01:36:46.796 ************************************ 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:46.796 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:46.797 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:36:47.054 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:36:47.054 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:36:47.054 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:36:47.054 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:47.054 05:31:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:47.054 { 01:36:47.054 "subsystems": [ 01:36:47.054 { 01:36:47.054 "subsystem": "bdev", 01:36:47.054 "config": [ 01:36:47.054 { 01:36:47.055 "params": { 01:36:47.055 "io_mechanism": "io_uring_cmd", 01:36:47.055 "conserve_cpu": false, 01:36:47.055 "filename": "/dev/ng0n1", 01:36:47.055 "name": "xnvme_bdev" 01:36:47.055 }, 01:36:47.055 "method": "bdev_xnvme_create" 01:36:47.055 }, 01:36:47.055 { 01:36:47.055 "method": "bdev_wait_for_examine" 01:36:47.055 } 01:36:47.055 ] 01:36:47.055 } 01:36:47.055 ] 01:36:47.055 } 01:36:47.312 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:36:47.312 fio-3.35 01:36:47.312 Starting 1 thread 01:36:53.873 01:36:53.873 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72866: Mon Dec 9 05:31:44 2024 01:36:53.873 read: IOPS=49.8k, BW=195MiB/s (204MB/s)(974MiB/5001msec) 01:36:53.873 slat (nsec): min=2449, max=64043, avg=3688.20, stdev=2096.23 01:36:53.873 clat (usec): min=788, max=2900, avg=1134.82, stdev=137.81 01:36:53.873 lat (usec): min=791, max=2914, avg=1138.51, stdev=138.28 01:36:53.873 clat percentiles (usec): 01:36:53.873 | 1.00th=[ 906], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1029], 01:36:53.873 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 01:36:53.873 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1287], 95.00th=[ 1352], 01:36:53.873 | 99.00th=[ 1598], 99.50th=[ 1729], 99.90th=[ 2073], 99.95th=[ 2376], 01:36:53.873 | 99.99th=[ 2802] 01:36:53.873 bw ( KiB/s): min=183952, max=213504, per=99.39%, avg=198160.00, stdev=10275.65, samples=9 01:36:53.873 iops : min=45988, max=53376, avg=49540.00, stdev=2568.91, samples=9 01:36:53.873 lat (usec) : 1000=12.99% 01:36:53.873 lat (msec) : 2=86.89%, 4=0.12% 01:36:53.873 cpu : usr=36.98%, sys=61.94%, ctx=12, majf=0, minf=762 01:36:53.873 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:36:53.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:53.873 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
01:36:53.874 issued rwts: total=249280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:53.874 latency : target=0, window=0, percentile=100.00%, depth=64 01:36:53.874 01:36:53.874 Run status group 0 (all jobs): 01:36:53.874 READ: bw=195MiB/s (204MB/s), 195MiB/s-195MiB/s (204MB/s-204MB/s), io=974MiB (1021MB), run=5001-5001msec 01:36:54.448 ----------------------------------------------------- 01:36:54.448 Suppressions used: 01:36:54.448 count bytes template 01:36:54.448 1 11 /usr/src/fio/parse.c 01:36:54.448 1 8 libtcmalloc_minimal.so 01:36:54.448 1 904 libcrypto.so 01:36:54.448 ----------------------------------------------------- 01:36:54.448 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:54.448 05:31:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:36:54.448 { 01:36:54.448 "subsystems": [ 01:36:54.448 { 01:36:54.448 "subsystem": "bdev", 01:36:54.448 "config": [ 01:36:54.448 { 01:36:54.448 "params": { 01:36:54.448 "io_mechanism": "io_uring_cmd", 01:36:54.448 "conserve_cpu": false, 01:36:54.448 "filename": "/dev/ng0n1", 01:36:54.448 "name": "xnvme_bdev" 01:36:54.448 }, 01:36:54.448 "method": "bdev_xnvme_create" 01:36:54.448 }, 01:36:54.448 { 01:36:54.448 "method": "bdev_wait_for_examine" 01:36:54.448 } 01:36:54.448 ] 01:36:54.448 } 01:36:54.448 ] 01:36:54.448 } 01:36:54.706 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:36:54.706 fio-3.35 01:36:54.706 Starting 1 thread 01:37:01.266 01:37:01.266 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72961: Mon Dec 9 05:31:51 2024 01:37:01.266 write: IOPS=44.8k, BW=175MiB/s (184MB/s)(876MiB/5001msec); 0 zone resets 01:37:01.266 slat (usec): min=2, max=128, avg= 4.50, stdev= 2.76 01:37:01.266 clat (usec): min=306, max=3741, avg=1247.00, stdev=163.08 01:37:01.266 lat (usec): min=311, max=3743, avg=1251.50, stdev=163.76 01:37:01.266 clat percentiles (usec): 01:37:01.266 | 1.00th=[ 971], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 01:37:01.266 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1270], 01:37:01.266 | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1434], 95.00th=[ 1532], 01:37:01.266 | 99.00th=[ 1811], 99.50th=[ 1893], 99.90th=[ 2147], 99.95th=[ 2376], 01:37:01.266 | 99.99th=[ 2868] 01:37:01.266 bw ( KiB/s): min=159232, max=192000, per=99.78%, avg=178963.56, stdev=10352.77, samples=9 01:37:01.266 iops : min=39808, max=48000, avg=44740.67, stdev=2588.22, samples=9 01:37:01.266 lat (usec) : 500=0.01%, 750=0.06%, 1000=2.15% 01:37:01.267 lat (msec) : 2=97.57%, 4=0.21% 01:37:01.267 cpu : usr=40.32%, sys=58.68%, ctx=17, majf=0, minf=763 01:37:01.267 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:37:01.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:01.267 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 01:37:01.267 issued rwts: total=0,224253,0,0 short=0,0,0,0 dropped=0,0,0,0 01:37:01.267 latency : target=0, window=0, percentile=100.00%, depth=64 01:37:01.267 01:37:01.267 Run status group 0 (all jobs): 01:37:01.267 WRITE: bw=175MiB/s (184MB/s), 175MiB/s-175MiB/s (184MB/s-184MB/s), io=876MiB (919MB), run=5001-5001msec 01:37:01.834 ----------------------------------------------------- 01:37:01.834 Suppressions used: 01:37:01.834 count bytes template 01:37:01.834 1 11 /usr/src/fio/parse.c 01:37:01.834 1 8 libtcmalloc_minimal.so 01:37:01.834 1 904 libcrypto.so 01:37:01.834 ----------------------------------------------------- 01:37:01.834 01:37:01.834 ************************************ 01:37:01.834 END TEST xnvme_fio_plugin 01:37:01.834 ************************************ 01:37:01.834 01:37:01.834 real 0m14.834s 01:37:01.834 user 0m7.661s 01:37:01.834 sys 0m6.800s 01:37:01.834 05:31:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:01.834 05:31:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:37:01.834 05:31:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 01:37:01.834 05:31:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 01:37:01.834 05:31:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 01:37:01.834 
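With this iteration the harness reaches the last cell of its test matrix: io_mechanism=io_uring_cmd on /dev/ng0n1, now with conserve_cpu=true. The xnvme_rpc test it dispatches next repeats the same create/inspect/delete RPC flow seen earlier in the log. A hedged sketch of the equivalent manual session, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and using scripts/rpc.py directly in place of the harness's rpc_cmd wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create the bdev; -c turns on conserve_cpu, matching cc["true"]=-c above
  $rpc bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  # read back one creation parameter the same way the test's rpc_xnvme helper does
  $rpc framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # expected output: true
  $rpc bdev_xnvme_delete xnvme_bdev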
05:31:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 01:37:01.834 05:31:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:01.834 05:31:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:01.834 05:31:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:01.834 ************************************ 01:37:01.834 START TEST xnvme_rpc 01:37:01.834 ************************************ 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73046 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73046 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73046 ']' 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:01.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:01.834 05:31:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:01.834 [2024-12-09 05:31:53.433007] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:37:01.834 [2024-12-09 05:31:53.433217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73046 ] 01:37:02.093 [2024-12-09 05:31:53.612501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:02.351 [2024-12-09 05:31:53.719928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:02.917 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:02.917 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 01:37:02.917 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:02.918 xnvme_bdev 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:02.918 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73046 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73046 ']' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73046 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73046 01:37:03.176 killing process with pid 73046 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73046' 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73046 01:37:03.176 05:31:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73046 01:37:05.787 ************************************ 01:37:05.787 END TEST xnvme_rpc 01:37:05.787 ************************************ 01:37:05.787 01:37:05.787 real 0m3.627s 01:37:05.787 user 0m3.782s 01:37:05.787 sys 0m0.554s 01:37:05.787 05:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:05.787 05:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 01:37:05.787 05:31:56 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 01:37:05.787 05:31:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:05.787 05:31:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:05.787 05:31:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:05.787 ************************************ 01:37:05.787 START TEST xnvme_bdevperf 01:37:05.787 ************************************ 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:37:05.787 05:31:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:37:05.787 { 01:37:05.787 "subsystems": [ 01:37:05.787 { 01:37:05.787 "subsystem": "bdev", 01:37:05.787 "config": [ 01:37:05.787 { 01:37:05.787 "params": { 01:37:05.787 "io_mechanism": "io_uring_cmd", 01:37:05.787 "conserve_cpu": true, 01:37:05.787 "filename": "/dev/ng0n1", 01:37:05.787 "name": "xnvme_bdev" 01:37:05.787 }, 01:37:05.787 "method": "bdev_xnvme_create" 01:37:05.787 }, 01:37:05.787 { 01:37:05.787 "method": "bdev_wait_for_examine" 01:37:05.787 } 01:37:05.787 ] 01:37:05.787 } 01:37:05.787 ] 01:37:05.787 } 01:37:05.787 [2024-12-09 05:31:57.094005] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:37:05.787 [2024-12-09 05:31:57.094198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73126 ] 01:37:05.787 [2024-12-09 05:31:57.284219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:06.045 [2024-12-09 05:31:57.407935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:06.303 Running I/O for 5 seconds... 01:37:08.167 47488.00 IOPS, 185.50 MiB/s [2024-12-09T05:32:01.158Z] 49584.00 IOPS, 193.69 MiB/s [2024-12-09T05:32:02.093Z] 49717.33 IOPS, 194.21 MiB/s [2024-12-09T05:32:03.029Z] 50472.00 IOPS, 197.16 MiB/s 01:37:11.412 Latency(us) 01:37:11.412 [2024-12-09T05:32:03.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:11.412 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 01:37:11.412 xnvme_bdev : 5.00 50784.16 198.38 0.00 0.00 1256.68 793.13 3932.16 01:37:11.412 [2024-12-09T05:32:03.029Z] =================================================================================================================== 01:37:11.412 [2024-12-09T05:32:03.029Z] Total : 50784.16 198.38 0.00 0.00 1256.68 793.13 3932.16 01:37:12.346 05:32:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:37:12.346 05:32:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 01:37:12.346 05:32:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:37:12.346 05:32:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:37:12.346 05:32:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:37:12.346 { 01:37:12.346 "subsystems": [ 01:37:12.346 { 01:37:12.346 "subsystem": "bdev", 01:37:12.346 "config": [ 01:37:12.346 { 01:37:12.346 "params": { 01:37:12.346 "io_mechanism": "io_uring_cmd", 01:37:12.346 "conserve_cpu": true, 01:37:12.346 "filename": "/dev/ng0n1", 01:37:12.346 "name": "xnvme_bdev" 01:37:12.346 }, 01:37:12.346 "method": "bdev_xnvme_create" 01:37:12.346 }, 01:37:12.346 { 01:37:12.346 "method": "bdev_wait_for_examine" 01:37:12.346 } 01:37:12.346 ] 01:37:12.346 } 01:37:12.346 ] 01:37:12.346 } 01:37:12.346 [2024-12-09 05:32:03.789046] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
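The JSON blob that gen_conf emits is fed to bdevperf over /dev/fd/62; the run can be reproduced from a scratch file instead (the xnvme.json name is ours, not the harness's). Flags as in the log: -q 64 is the queue depth, -o 4096 the I/O size in bytes, -t 5 the runtime in seconds, -w the pattern (randread here, then randwrite/unmap/write_zeroes in the following runs), and -T pins the run to the xnvme_bdev the config creates, going by how the test uses it:

    cat > xnvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_xnvme_create",
        "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": true,
                    "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    ./build/examples/bdevperf --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The MiB/s column is just IOPS times the 4 KiB I/O size: 47488 x 4096 / 2^20 = 185.50 MiB/s, matching the first randread sample above.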
01:37:12.346 [2024-12-09 05:32:03.789215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73195 ] 01:37:12.346 [2024-12-09 05:32:03.958236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:12.605 [2024-12-09 05:32:04.070139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:12.863 Running I/O for 5 seconds... 01:37:15.184 41787.00 IOPS, 163.23 MiB/s [2024-12-09T05:32:07.392Z] 39893.00 IOPS, 155.83 MiB/s [2024-12-09T05:32:08.767Z] 40615.00 IOPS, 158.65 MiB/s [2024-12-09T05:32:09.702Z] 40989.00 IOPS, 160.11 MiB/s 01:37:18.085 Latency(us) 01:37:18.085 [2024-12-09T05:32:09.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:18.085 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 01:37:18.085 xnvme_bdev : 5.00 41056.96 160.38 0.00 0.00 1553.31 77.27 7298.33 01:37:18.085 [2024-12-09T05:32:09.702Z] =================================================================================================================== 01:37:18.085 [2024-12-09T05:32:09.702Z] Total : 41056.96 160.38 0.00 0.00 1553.31 77.27 7298.33 01:37:19.020 05:32:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:37:19.020 05:32:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:37:19.020 05:32:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 01:37:19.020 05:32:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:37:19.020 05:32:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:37:19.020 { 01:37:19.020 "subsystems": [ 01:37:19.020 { 01:37:19.020 "subsystem": "bdev", 01:37:19.020 "config": [ 01:37:19.020 { 01:37:19.020 "params": { 01:37:19.020 "io_mechanism": "io_uring_cmd", 01:37:19.020 "conserve_cpu": true, 01:37:19.020 "filename": "/dev/ng0n1", 01:37:19.020 "name": "xnvme_bdev" 01:37:19.020 }, 01:37:19.020 "method": "bdev_xnvme_create" 01:37:19.020 }, 01:37:19.020 { 01:37:19.020 "method": "bdev_wait_for_examine" 01:37:19.020 } 01:37:19.020 ] 01:37:19.020 } 01:37:19.020 ] 01:37:19.020 } 01:37:19.020 [2024-12-09 05:32:10.544243] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:37:19.020 [2024-12-09 05:32:10.544452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73275 ] 01:37:19.279 [2024-12-09 05:32:10.725280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:19.279 [2024-12-09 05:32:10.828599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:19.538 Running I/O for 5 seconds... 
01:37:21.851 82368.00 IOPS, 321.75 MiB/s [2024-12-09T05:32:14.402Z] 82720.00 IOPS, 323.12 MiB/s [2024-12-09T05:32:15.338Z] 81962.67 IOPS, 320.17 MiB/s [2024-12-09T05:32:16.300Z] 81792.00 IOPS, 319.50 MiB/s 01:37:24.683 Latency(us) 01:37:24.683 [2024-12-09T05:32:16.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:24.683 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 01:37:24.683 xnvme_bdev : 5.00 81625.21 318.85 0.00 0.00 780.85 467.32 4110.89 01:37:24.683 [2024-12-09T05:32:16.300Z] =================================================================================================================== 01:37:24.683 [2024-12-09T05:32:16.300Z] Total : 81625.21 318.85 0.00 0.00 780.85 467.32 4110.89 01:37:25.617 05:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 01:37:25.617 05:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 01:37:25.617 05:32:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 01:37:25.617 05:32:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 01:37:25.617 05:32:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:37:25.617 { 01:37:25.617 "subsystems": [ 01:37:25.617 { 01:37:25.617 "subsystem": "bdev", 01:37:25.617 "config": [ 01:37:25.617 { 01:37:25.617 "params": { 01:37:25.617 "io_mechanism": "io_uring_cmd", 01:37:25.617 "conserve_cpu": true, 01:37:25.617 "filename": "/dev/ng0n1", 01:37:25.617 "name": "xnvme_bdev" 01:37:25.617 }, 01:37:25.617 "method": "bdev_xnvme_create" 01:37:25.617 }, 01:37:25.617 { 01:37:25.617 "method": "bdev_wait_for_examine" 01:37:25.617 } 01:37:25.617 ] 01:37:25.617 } 01:37:25.617 ] 01:37:25.617 } 01:37:25.617 [2024-12-09 05:32:17.213603] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:37:25.617 [2024-12-09 05:32:17.213825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73349 ] 01:37:25.874 [2024-12-09 05:32:17.396201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:26.133 [2024-12-09 05:32:17.510793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:26.391 Running I/O for 5 seconds... 
01:37:28.256 42769.00 IOPS, 167.07 MiB/s [2024-12-09T05:32:21.248Z] 42157.50 IOPS, 164.68 MiB/s [2024-12-09T05:32:22.183Z] 41618.67 IOPS, 162.57 MiB/s [2024-12-09T05:32:23.118Z] 41561.25 IOPS, 162.35 MiB/s [2024-12-09T05:32:23.118Z] 41490.80 IOPS, 162.07 MiB/s 01:37:31.501 Latency(us) 01:37:31.501 [2024-12-09T05:32:23.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:31.501 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 01:37:31.501 xnvme_bdev : 5.00 41462.40 161.96 0.00 0.00 1534.94 61.44 16920.20 01:37:31.501 [2024-12-09T05:32:23.118Z] =================================================================================================================== 01:37:31.501 [2024-12-09T05:32:23.118Z] Total : 41462.40 161.96 0.00 0.00 1534.94 61.44 16920.20 01:37:32.437 01:37:32.437 real 0m27.007s 01:37:32.437 user 0m16.362s 01:37:32.437 sys 0m8.157s 01:37:32.437 05:32:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:32.437 ************************************ 01:37:32.437 END TEST xnvme_bdevperf 01:37:32.437 ************************************ 01:37:32.437 05:32:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:37:32.437 05:32:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 01:37:32.437 05:32:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:32.437 05:32:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:32.437 05:32:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:32.437 ************************************ 01:37:32.437 START TEST xnvme_fio_plugin 01:37:32.437 ************************************ 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
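The fio_bdev wrapper starting here ends up running stock fio with SPDK's external bdev ioengine; the next few lines probe the plugin with ldd/grep/awk so that on this ASAN build libasan is preloaded ahead of it. A sketch of the eventual invocation, assuming the repo root and the xnvme.json written in the earlier bdevperf sketch in place of /dev/fd/62:

    # ASAN preload only applies on sanitizer builds like this one
    LD_PRELOAD="/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev from the JSON config, not a device node; the spdk_bdev engine resolves it internally.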
01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:37:32.437 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:37:32.697 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:37:32.697 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:37:32.697 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:37:32.697 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:37:32.697 05:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:37:32.697 { 01:37:32.697 "subsystems": [ 01:37:32.697 { 01:37:32.697 "subsystem": "bdev", 01:37:32.697 "config": [ 01:37:32.697 { 01:37:32.697 "params": { 01:37:32.697 "io_mechanism": "io_uring_cmd", 01:37:32.697 "conserve_cpu": true, 01:37:32.697 "filename": "/dev/ng0n1", 01:37:32.697 "name": "xnvme_bdev" 01:37:32.697 }, 01:37:32.697 "method": "bdev_xnvme_create" 01:37:32.697 }, 01:37:32.697 { 01:37:32.697 "method": "bdev_wait_for_examine" 01:37:32.697 } 01:37:32.697 ] 01:37:32.697 } 01:37:32.697 ] 01:37:32.697 } 01:37:32.697 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:37:32.697 fio-3.35 01:37:32.697 Starting 1 thread 01:37:39.305 01:37:39.305 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73468: Mon Dec 9 05:32:30 2024 01:37:39.305 read: IOPS=50.7k, BW=198MiB/s (208MB/s)(991MiB/5001msec) 01:37:39.305 slat (nsec): min=2547, max=72326, avg=3684.29, stdev=1921.60 01:37:39.305 clat (usec): min=770, max=5229, avg=1114.90, stdev=149.16 01:37:39.305 lat (usec): min=773, max=5256, avg=1118.58, stdev=149.57 01:37:39.305 clat percentiles (usec): 01:37:39.305 | 1.00th=[ 881], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 1004], 01:37:39.305 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 01:37:39.305 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1270], 95.00th=[ 1352], 01:37:39.305 | 99.00th=[ 1598], 99.50th=[ 1696], 99.90th=[ 1909], 99.95th=[ 2073], 01:37:39.305 | 99.99th=[ 4948] 01:37:39.305 bw ( KiB/s): min=189952, max=216064, per=99.91%, avg=202638.22, stdev=6834.66, samples=9 01:37:39.305 iops : min=47488, max=54016, avg=50659.56, stdev=1708.67, samples=9 01:37:39.305 lat (usec) : 1000=18.59% 01:37:39.305 lat (msec) : 2=81.34%, 4=0.04%, 10=0.03% 01:37:39.305 cpu : usr=56.36%, sys=40.30%, ctx=9, majf=0, minf=762 01:37:39.305 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 01:37:39.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:39.305 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 01:37:39.305 issued rwts: total=253568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:37:39.305 latency : target=0, window=0, percentile=100.00%, depth=64 01:37:39.305 01:37:39.305 Run status group 0 (all jobs): 01:37:39.305 READ: bw=198MiB/s (208MB/s), 198MiB/s-198MiB/s (208MB/s-208MB/s), io=991MiB (1039MB), run=5001-5001msec 01:37:40.242 ----------------------------------------------------- 01:37:40.242 Suppressions used: 01:37:40.242 count bytes template 01:37:40.242 1 11 /usr/src/fio/parse.c 01:37:40.242 1 8 libtcmalloc_minimal.so 01:37:40.242 1 904 libcrypto.so 01:37:40.242 ----------------------------------------------------- 01:37:40.242 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:37:40.242 05:32:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 01:37:40.242 { 01:37:40.242 "subsystems": [ 01:37:40.242 { 01:37:40.242 "subsystem": "bdev", 01:37:40.242 "config": [ 01:37:40.242 { 01:37:40.242 "params": { 01:37:40.242 "io_mechanism": "io_uring_cmd", 01:37:40.242 "conserve_cpu": true, 01:37:40.242 "filename": "/dev/ng0n1", 01:37:40.242 "name": "xnvme_bdev" 01:37:40.242 }, 01:37:40.242 "method": "bdev_xnvme_create" 01:37:40.242 }, 01:37:40.242 { 01:37:40.242 "method": "bdev_wait_for_examine" 01:37:40.242 } 01:37:40.242 ] 01:37:40.242 } 01:37:40.242 ] 01:37:40.242 } 01:37:40.242 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 01:37:40.242 fio-3.35 01:37:40.242 Starting 1 thread 01:37:46.805 01:37:46.805 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73570: Mon Dec 9 05:32:37 2024 01:37:46.805 write: IOPS=44.3k, BW=173MiB/s (181MB/s)(864MiB/5001msec); 0 zone resets 01:37:46.805 slat (usec): min=2, max=144, avg= 4.89, stdev= 2.80 01:37:46.805 clat (usec): min=91, max=6742, avg=1255.71, stdev=266.35 01:37:46.805 lat (usec): min=96, max=6748, avg=1260.60, stdev=266.82 01:37:46.805 clat percentiles (usec): 01:37:46.805 | 1.00th=[ 840], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1106], 01:37:46.805 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1270], 01:37:46.805 | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1434], 95.00th=[ 1565], 01:37:46.805 | 99.00th=[ 1958], 99.50th=[ 2966], 99.90th=[ 4359], 99.95th=[ 4686], 01:37:46.805 | 99.99th=[ 5342] 01:37:46.805 bw ( KiB/s): min=174072, max=186880, per=100.00%, avg=178798.22, stdev=4500.27, samples=9 01:37:46.805 iops : min=43518, max=46720, avg=44699.56, stdev=1125.07, samples=9 01:37:46.805 lat (usec) : 100=0.01%, 250=0.08%, 500=0.30%, 750=0.41%, 1000=2.57% 01:37:46.805 lat (msec) : 2=95.68%, 4=0.79%, 10=0.15% 01:37:46.805 cpu : usr=64.96%, sys=31.26%, ctx=13, majf=0, minf=763 01:37:46.805 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.6%, 32=50.9%, >=64=1.7% 01:37:46.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:46.805 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 01:37:46.805 issued rwts: total=0,221305,0,0 short=0,0,0,0 dropped=0,0,0,0 01:37:46.805 latency : target=0, window=0, percentile=100.00%, depth=64 01:37:46.805 01:37:46.805 Run status group 0 (all jobs): 01:37:46.805 WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=864MiB (906MB), run=5001-5001msec 01:37:47.798 ----------------------------------------------------- 01:37:47.798 Suppressions used: 01:37:47.798 count bytes template 01:37:47.798 1 11 /usr/src/fio/parse.c 01:37:47.798 1 8 libtcmalloc_minimal.so 01:37:47.798 1 904 libcrypto.so 01:37:47.798 ----------------------------------------------------- 01:37:47.798 01:37:47.798 01:37:47.798 real 0m15.141s 01:37:47.798 user 0m10.100s 01:37:47.798 sys 0m4.400s 01:37:47.798 05:32:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:47.798 ************************************ 01:37:47.798 END TEST xnvme_fio_plugin 01:37:47.798 05:32:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 01:37:47.798 ************************************ 01:37:47.798 05:32:39 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73046 01:37:47.798 05:32:39 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73046 ']' 01:37:47.798 05:32:39 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73046 
01:37:47.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73046) - No such process 01:37:47.798 Process with pid 73046 is not found 01:37:47.798 05:32:39 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73046 is not found' 01:37:47.798 01:37:47.798 real 3m48.588s 01:37:47.798 user 2m5.844s 01:37:47.798 sys 1m25.226s 01:37:47.798 05:32:39 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:47.798 05:32:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:47.798 ************************************ 01:37:47.798 END TEST nvme_xnvme 01:37:47.798 ************************************ 01:37:47.798 05:32:39 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 01:37:47.798 05:32:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:37:47.798 05:32:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:47.798 05:32:39 -- common/autotest_common.sh@10 -- # set +x 01:37:47.798 ************************************ 01:37:47.798 START TEST blockdev_xnvme 01:37:47.798 ************************************ 01:37:47.798 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 01:37:47.798 * Looking for test storage... 01:37:47.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:37:47.798 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:37:47.798 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 01:37:47.798 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:37:48.057 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:37:48.057 05:32:39 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 01:37:48.057 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:37:48.057 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:37:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:48.057 --rc genhtml_branch_coverage=1 01:37:48.057 --rc genhtml_function_coverage=1 01:37:48.057 --rc genhtml_legend=1 01:37:48.057 --rc geninfo_all_blocks=1 01:37:48.057 --rc geninfo_unexecuted_blocks=1 01:37:48.057 01:37:48.057 ' 01:37:48.057 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:37:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:48.057 --rc genhtml_branch_coverage=1 01:37:48.057 --rc genhtml_function_coverage=1 01:37:48.057 --rc genhtml_legend=1 01:37:48.057 --rc geninfo_all_blocks=1 01:37:48.057 --rc geninfo_unexecuted_blocks=1 01:37:48.057 01:37:48.057 ' 01:37:48.057 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:37:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:48.057 --rc genhtml_branch_coverage=1 01:37:48.057 --rc genhtml_function_coverage=1 01:37:48.057 --rc genhtml_legend=1 01:37:48.057 --rc geninfo_all_blocks=1 01:37:48.057 --rc geninfo_unexecuted_blocks=1 01:37:48.058 01:37:48.058 ' 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:37:48.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:48.058 --rc genhtml_branch_coverage=1 01:37:48.058 --rc genhtml_function_coverage=1 01:37:48.058 --rc genhtml_legend=1 01:37:48.058 --rc geninfo_all_blocks=1 01:37:48.058 --rc geninfo_unexecuted_blocks=1 01:37:48.058 01:37:48.058 ' 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73703 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73703 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73703 ']' 01:37:48.058 05:32:39 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:48.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:48.058 05:32:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:48.058 [2024-12-09 05:32:39.605618] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:37:48.058 [2024-12-09 05:32:39.606582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73703 ] 01:37:48.317 [2024-12-09 05:32:39.800887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:48.576 [2024-12-09 05:32:39.976727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:49.511 05:32:40 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:49.511 05:32:40 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 01:37:49.511 05:32:40 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:37:49.511 05:32:40 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 01:37:49.511 05:32:40 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 01:37:49.511 05:32:40 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 01:37:49.511 05:32:40 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:37:50.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:50.647 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:37:50.647 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:37:50.647 0000:00:12.0 (1b36 0010): Already using the nvme driver 01:37:50.647 0000:00:13.0 (1b36 0010): Already using the nvme driver 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
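get_zoned_devs walks /sys/block/nvme* and treats a device as zoned when the kernel's queue/zoned attribute reports anything other than "none"; the per-device checks here and on the following lines all compare "none != none", so every namespace ends up in the bdev_xnvme_create list built afterwards. The check reduces to roughly this (a paraphrase of is_block_zoned, not the helper verbatim):

    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue      # attribute absent on old kernels
        [[ $(<"$dev/queue/zoned") != none ]] && echo "${dev##*/} is zoned"
    done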
01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 01:37:50.647 nvme0n1 01:37:50.647 nvme0n2 01:37:50.647 nvme0n3 01:37:50.647 nvme1n1 01:37:50.647 nvme2n1 01:37:50.647 nvme3n1 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 01:37:50.647 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 01:37:50.647 05:32:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:50.647 05:32:42 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 01:37:50.907 05:32:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e75c492b-eefc-42a6-b707-c56bb54feb0d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e75c492b-eefc-42a6-b707-c56bb54feb0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "07586d11-2f1c-44b3-ab26-ccf8a28717c1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "07586d11-2f1c-44b3-ab26-ccf8a28717c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "e8b72853-92ce-4300-9d39-03b8ee6f390a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e8b72853-92ce-4300-9d39-03b8ee6f390a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9a3fbaa9-f17c-498b-9d98-371bf4314b4f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9a3fbaa9-f17c-498b-9d98-371bf4314b4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e9383fc8-08e0-4a9b-8447-2b1d8ec05561"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e9383fc8-08e0-4a9b-8447-2b1d8ec05561",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "81a7f009-d0ef-4393-8153-7e064a5db10b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "81a7f009-d0ef-4393-8153-7e064a5db10b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:37:50.907 05:32:42 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73703 01:37:50.907 05:32:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73703 ']' 01:37:50.907 05:32:42 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73703 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73703 01:37:50.908 killing process with pid 73703 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73703' 01:37:50.908 05:32:42 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73703 01:37:50.908 
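The two jq passes above (select(.claimed == false), then .name) are how blockdev.sh turns the bdev_get_bdevs dump into bdev_list; the first entry, nvme0n1, becomes hello_world_bdev for the test that follows. Collapsed into one pipe, assuming a live spdk_tgt on the default socket:

    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'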
05:32:42 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73703 01:37:53.442 05:32:44 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:37:53.442 05:32:44 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 01:37:53.442 05:32:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:37:53.442 05:32:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:53.442 05:32:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:53.442 ************************************ 01:37:53.442 START TEST bdev_hello_world 01:37:53.442 ************************************ 01:37:53.442 05:32:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 01:37:53.442 [2024-12-09 05:32:44.879763] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:37:53.442 [2024-12-09 05:32:44.879926] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74000 ] 01:37:53.442 [2024-12-09 05:32:45.054564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:53.700 [2024-12-09 05:32:45.191921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:54.266 [2024-12-09 05:32:45.643290] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:37:54.266 [2024-12-09 05:32:45.643359] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 01:37:54.266 [2024-12-09 05:32:45.643382] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:37:54.266 [2024-12-09 05:32:45.645843] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:37:54.266 [2024-12-09 05:32:45.646560] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:37:54.266 [2024-12-09 05:32:45.646601] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:37:54.266 [2024-12-09 05:32:45.646784] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
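bdev_hello_world above is the stock hello_bdev example pointed at the first xnvme bdev: it opens nvme0n1 through the bdev layer, writes "Hello World!", reads the string back, and stops, exactly as the NOTICE lines trace. Reproducing it by hand, assuming the repo root and the bdev.json generated earlier in the run:

    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1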
01:37:54.266 01:37:54.266 [2024-12-09 05:32:45.646815] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:37:55.198 01:37:55.198 real 0m2.000s 01:37:55.198 user 0m1.565s 01:37:55.198 sys 0m0.317s 01:37:55.198 05:32:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:55.198 ************************************ 01:37:55.198 END TEST bdev_hello_world 01:37:55.198 05:32:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:37:55.198 ************************************ 01:37:55.456 05:32:46 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:37:55.456 05:32:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:37:55.456 05:32:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:55.456 05:32:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:55.456 ************************************ 01:37:55.456 START TEST bdev_bounds 01:37:55.456 ************************************ 01:37:55.456 Process bdevio pid: 74042 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74042 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74042' 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74042 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74042 ']' 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:55.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:55.456 05:32:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:37:55.456 [2024-12-09 05:32:46.948764] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
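bdev_bounds runs bdevio in wait mode and then drives the CUnit suites over its RPC pipe; -w holds bdevio until perform_tests is issued (going by the tests.py call that follows), -s 0 matches the PRE_RESERVED_MEM=0 blockdev.sh sets for xnvme, and the -c 0x7 core mask explains the three reactors that come up next. A sketch of the pair, assuming the repo root and the same bdev.json:

    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
    ./test/bdev/bdevio/tests.py perform_tests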
01:37:55.456 [2024-12-09 05:32:46.949234] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74042 ] 01:37:55.713 [2024-12-09 05:32:47.133889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:37:55.713 [2024-12-09 05:32:47.279368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:55.713 [2024-12-09 05:32:47.279529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:55.713 [2024-12-09 05:32:47.279543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:37:56.310 05:32:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:56.310 05:32:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:37:56.310 05:32:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:37:56.567 I/O targets: 01:37:56.567 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 01:37:56.567 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 01:37:56.567 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 01:37:56.567 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 01:37:56.567 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 01:37:56.567 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 01:37:56.567 01:37:56.567 01:37:56.567 CUnit - A unit testing framework for C - Version 2.1-3 01:37:56.567 http://cunit.sourceforge.net/ 01:37:56.567 01:37:56.567 01:37:56.567 Suite: bdevio tests on: nvme3n1 01:37:56.567 Test: blockdev write read block ...passed 01:37:56.567 Test: blockdev write zeroes read block ...passed 01:37:56.567 Test: blockdev write zeroes read no split ...passed 01:37:56.567 Test: blockdev write zeroes read split ...passed 01:37:56.567 Test: blockdev write zeroes read split partial ...passed 01:37:56.567 Test: blockdev reset ...passed 01:37:56.567 Test: blockdev write read 8 blocks ...passed 01:37:56.567 Test: blockdev write read size > 128k ...passed 01:37:56.567 Test: blockdev write read invalid size ...passed 01:37:56.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:37:56.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:37:56.567 Test: blockdev write read max offset ...passed 01:37:56.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:37:56.567 Test: blockdev writev readv 8 blocks ...passed 01:37:56.567 Test: blockdev writev readv 30 x 1block ...passed 01:37:56.567 Test: blockdev writev readv block ...passed 01:37:56.567 Test: blockdev writev readv size > 128k ...passed 01:37:56.567 Test: blockdev writev readv size > 128k in two iovs ...passed 01:37:56.567 Test: blockdev comparev and writev ...passed 01:37:56.567 Test: blockdev nvme passthru rw ...passed 01:37:56.567 Test: blockdev nvme passthru vendor specific ...passed 01:37:56.567 Test: blockdev nvme admin passthru ...passed 01:37:56.567 Test: blockdev copy ...passed 01:37:56.567 Suite: bdevio tests on: nvme2n1 01:37:56.567 Test: blockdev write read block ...passed 01:37:56.567 Test: blockdev write zeroes read block ...passed 01:37:56.567 Test: blockdev write zeroes read no split ...passed 01:37:56.567 Test: blockdev write zeroes read split ...passed 01:37:56.567 Test: blockdev write zeroes read split partial ...passed 01:37:56.567 Test: blockdev reset ...passed 
01:37:56.567 Test: blockdev write read 8 blocks ...passed 01:37:56.567 Test: blockdev write read size > 128k ...passed 01:37:56.567 Test: blockdev write read invalid size ...passed 01:37:56.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:37:56.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:37:56.567 Test: blockdev write read max offset ...passed 01:37:56.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:37:56.567 Test: blockdev writev readv 8 blocks ...passed 01:37:56.567 Test: blockdev writev readv 30 x 1block ...passed 01:37:56.567 Test: blockdev writev readv block ...passed 01:37:56.567 Test: blockdev writev readv size > 128k ...passed 01:37:56.567 Test: blockdev writev readv size > 128k in two iovs ...passed 01:37:56.567 Test: blockdev comparev and writev ...passed 01:37:56.567 Test: blockdev nvme passthru rw ...passed 01:37:56.567 Test: blockdev nvme passthru vendor specific ...passed 01:37:56.567 Test: blockdev nvme admin passthru ...passed 01:37:56.567 Test: blockdev copy ...passed 01:37:56.567 Suite: bdevio tests on: nvme1n1 01:37:56.567 Test: blockdev write read block ...passed 01:37:56.567 Test: blockdev write zeroes read block ...passed 01:37:56.567 Test: blockdev write zeroes read no split ...passed 01:37:56.824 Test: blockdev write zeroes read split ...passed 01:37:56.824 Test: blockdev write zeroes read split partial ...passed 01:37:56.824 Test: blockdev reset ...passed 01:37:56.824 Test: blockdev write read 8 blocks ...passed 01:37:56.824 Test: blockdev write read size > 128k ...passed 01:37:56.824 Test: blockdev write read invalid size ...passed 01:37:56.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:37:56.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:37:56.824 Test: blockdev write read max offset ...passed 01:37:56.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:37:56.824 Test: blockdev writev readv 8 blocks ...passed 01:37:56.824 Test: blockdev writev readv 30 x 1block ...passed 01:37:56.824 Test: blockdev writev readv block ...passed 01:37:56.824 Test: blockdev writev readv size > 128k ...passed 01:37:56.824 Test: blockdev writev readv size > 128k in two iovs ...passed 01:37:56.824 Test: blockdev comparev and writev ...passed 01:37:56.824 Test: blockdev nvme passthru rw ...passed 01:37:56.824 Test: blockdev nvme passthru vendor specific ...passed 01:37:56.824 Test: blockdev nvme admin passthru ...passed 01:37:56.824 Test: blockdev copy ...passed 01:37:56.824 Suite: bdevio tests on: nvme0n3 01:37:56.824 Test: blockdev write read block ...passed 01:37:56.824 Test: blockdev write zeroes read block ...passed 01:37:56.824 Test: blockdev write zeroes read no split ...passed 01:37:56.824 Test: blockdev write zeroes read split ...passed 01:37:56.824 Test: blockdev write zeroes read split partial ...passed 01:37:56.824 Test: blockdev reset ...passed 01:37:56.824 Test: blockdev write read 8 blocks ...passed 01:37:56.824 Test: blockdev write read size > 128k ...passed 01:37:56.824 Test: blockdev write read invalid size ...passed 01:37:56.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:37:56.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:37:56.824 Test: blockdev write read max offset ...passed 01:37:56.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:37:56.824 Test: blockdev writev readv 8 blocks 
...passed 01:37:56.824 Test: blockdev writev readv 30 x 1block ...passed 01:37:56.824 Test: blockdev writev readv block ...passed 01:37:56.824 Test: blockdev writev readv size > 128k ...passed 01:37:56.824 Test: blockdev writev readv size > 128k in two iovs ...passed 01:37:56.824 Test: blockdev comparev and writev ...passed 01:37:56.824 Test: blockdev nvme passthru rw ...passed 01:37:56.824 Test: blockdev nvme passthru vendor specific ...passed 01:37:56.824 Test: blockdev nvme admin passthru ...passed 01:37:56.824 Test: blockdev copy ...passed 01:37:56.824 Suite: bdevio tests on: nvme0n2 01:37:56.824 Test: blockdev write read block ...passed 01:37:56.824 Test: blockdev write zeroes read block ...passed 01:37:56.824 Test: blockdev write zeroes read no split ...passed 01:37:56.824 Test: blockdev write zeroes read split ...passed 01:37:56.824 Test: blockdev write zeroes read split partial ...passed 01:37:56.824 Test: blockdev reset ...passed 01:37:56.824 Test: blockdev write read 8 blocks ...passed 01:37:56.824 Test: blockdev write read size > 128k ...passed 01:37:56.824 Test: blockdev write read invalid size ...passed 01:37:56.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:37:56.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:37:56.824 Test: blockdev write read max offset ...passed 01:37:56.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:37:56.824 Test: blockdev writev readv 8 blocks ...passed 01:37:56.824 Test: blockdev writev readv 30 x 1block ...passed 01:37:56.824 Test: blockdev writev readv block ...passed 01:37:56.824 Test: blockdev writev readv size > 128k ...passed 01:37:56.824 Test: blockdev writev readv size > 128k in two iovs ...passed 01:37:56.824 Test: blockdev comparev and writev ...passed 01:37:56.824 Test: blockdev nvme passthru rw ...passed 01:37:56.824 Test: blockdev nvme passthru vendor specific ...passed 01:37:56.824 Test: blockdev nvme admin passthru ...passed 01:37:56.824 Test: blockdev copy ...passed 01:37:56.824 Suite: bdevio tests on: nvme0n1 01:37:56.824 Test: blockdev write read block ...passed 01:37:56.824 Test: blockdev write zeroes read block ...passed 01:37:56.824 Test: blockdev write zeroes read no split ...passed 01:37:57.082 Test: blockdev write zeroes read split ...passed 01:37:57.082 Test: blockdev write zeroes read split partial ...passed 01:37:57.082 Test: blockdev reset ...passed 01:37:57.082 Test: blockdev write read 8 blocks ...passed 01:37:57.082 Test: blockdev write read size > 128k ...passed 01:37:57.082 Test: blockdev write read invalid size ...passed 01:37:57.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:37:57.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:37:57.082 Test: blockdev write read max offset ...passed 01:37:57.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:37:57.082 Test: blockdev writev readv 8 blocks ...passed 01:37:57.082 Test: blockdev writev readv 30 x 1block ...passed 01:37:57.082 Test: blockdev writev readv block ...passed 01:37:57.082 Test: blockdev writev readv size > 128k ...passed 01:37:57.082 Test: blockdev writev readv size > 128k in two iovs ...passed 01:37:57.082 Test: blockdev comparev and writev ...passed 01:37:57.082 Test: blockdev nvme passthru rw ...passed 01:37:57.082 Test: blockdev nvme passthru vendor specific ...passed 01:37:57.082 Test: blockdev nvme admin passthru ...passed 01:37:57.082 Test: blockdev copy ...passed 
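With every suite green, the harness tears the bdevio app down; the killprocess call that follows the run summary below is essentially this sequence (run from the shell that spawned the pid, so wait can reap it):

    pid=74042
    kill -0 "$pid"                    # fails if the process is already gone
    ps --no-headers -o comm= "$pid"   # sanity-check what is about to be killed
    kill "$pid" && wait "$pid"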
01:37:57.082 01:37:57.082 Run Summary: Type Total Ran Passed Failed Inactive 01:37:57.082 suites 6 6 n/a 0 0 01:37:57.082 tests 138 138 138 0 0 01:37:57.082 asserts 780 780 780 0 n/a 01:37:57.082 01:37:57.082 Elapsed time = 1.287 seconds 01:37:57.082 0 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74042 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74042 ']' 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74042 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74042 01:37:57.082 killing process with pid 74042 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74042' 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74042 01:37:57.082 05:32:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74042 01:37:58.454 05:32:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:37:58.454 01:37:58.454 real 0m2.938s 01:37:58.454 user 0m7.091s 01:37:58.454 sys 0m0.519s 01:37:58.454 05:32:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:58.454 05:32:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:37:58.454 ************************************ 01:37:58.454 END TEST bdev_bounds 01:37:58.454 ************************************ 01:37:58.454 05:32:49 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 01:37:58.454 05:32:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:37:58.454 05:32:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:58.454 05:32:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:37:58.454 ************************************ 01:37:58.454 START TEST bdev_nbd 01:37:58.454 ************************************ 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74098 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74098 /var/tmp/spdk-nbd.sock 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74098 ']' 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:37:58.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:58.454 05:32:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:37:58.454 [2024-12-09 05:32:49.929130] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
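What follows is the NBD attach loop: each bdev is exported as a kernel /dev/nbdX node through the private RPC socket that bdev_svc was started with. In essence, assuming the same paths, one iteration looks like:

    SPDK=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    # Attach one bdev to a kernel NBD node over the private RPC socket.
    "$SPDK/scripts/rpc.py" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    # The device is usable once it appears in /proc/partitions and a single
    # O_DIRECT 4 KiB read succeeds -- the waitfornbd pattern seen below.
    grep -q -w nbd0 /proc/partitions &&
        dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # List the active nbd/bdev pairs (JSON), extracting just the device paths.
    "$SPDK/scripts/rpc.py" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'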
01:37:58.454 [2024-12-09 05:32:49.929433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:37:58.712 [2024-12-09 05:32:50.105172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:58.712 [2024-12-09 05:32:50.243574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:37:59.647 05:32:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:37:59.906 
1+0 records in 01:37:59.906 1+0 records out 01:37:59.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557544 s, 7.3 MB/s 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:37:59.906 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:00.165 1+0 records in 01:38:00.165 1+0 records out 01:38:00.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596653 s, 6.9 MB/s 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:38:00.165 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 01:38:00.424 05:32:51 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:00.424 1+0 records in 01:38:00.424 1+0 records out 01:38:00.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548221 s, 7.5 MB/s 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:38:00.424 05:32:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:00.683 1+0 records in 01:38:00.683 1+0 records out 01:38:00.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809262 s, 5.1 MB/s 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:38:00.683 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:00.942 1+0 records in 01:38:00.942 1+0 records out 01:38:00.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619065 s, 6.6 MB/s 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:38:00.942 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 01:38:01.201 05:32:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:01.201 1+0 records in 01:38:01.201 1+0 records out 01:38:01.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836446 s, 4.9 MB/s 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 01:38:01.201 05:32:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:38:01.460 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd0", 01:38:01.460 "bdev_name": "nvme0n1" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd1", 01:38:01.460 "bdev_name": "nvme0n2" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd2", 01:38:01.460 "bdev_name": "nvme0n3" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd3", 01:38:01.460 "bdev_name": "nvme1n1" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd4", 01:38:01.460 "bdev_name": "nvme2n1" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd5", 01:38:01.460 "bdev_name": "nvme3n1" 01:38:01.460 } 01:38:01.460 ]' 01:38:01.460 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:38:01.460 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:38:01.460 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd0", 01:38:01.460 "bdev_name": "nvme0n1" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd1", 01:38:01.460 "bdev_name": "nvme0n2" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd2", 01:38:01.460 "bdev_name": "nvme0n3" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd3", 01:38:01.460 "bdev_name": "nvme1n1" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": "/dev/nbd4", 01:38:01.460 "bdev_name": "nvme2n1" 01:38:01.460 }, 01:38:01.460 { 01:38:01.460 "nbd_device": 
"/dev/nbd5", 01:38:01.460 "bdev_name": "nvme3n1" 01:38:01.460 } 01:38:01.460 ]' 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:01.719 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:01.978 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:02.237 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:02.496 05:32:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 01:38:02.496 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:02.762 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:03.020 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:03.328 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:38:03.587 05:32:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:38:03.587 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:03.588 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 01:38:03.845 /dev/nbd0 01:38:03.845 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:38:03.845 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:38:03.845 05:32:55 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:38:03.845 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:03.845 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:03.845 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:03.846 1+0 records in 01:38:03.846 1+0 records out 01:38:03.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000764885 s, 5.4 MB/s 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:03.846 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 01:38:04.102 /dev/nbd1 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:04.102 1+0 records in 01:38:04.102 1+0 records out 01:38:04.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766145 s, 5.3 MB/s 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:04.102 05:32:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:04.102 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 01:38:04.359 /dev/nbd10 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:04.359 1+0 records in 01:38:04.359 1+0 records out 01:38:04.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861058 s, 4.8 MB/s 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:04.359 05:32:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 01:38:04.926 /dev/nbd11 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:04.926 05:32:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:04.926 1+0 records in 01:38:04.926 1+0 records out 01:38:04.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797982 s, 5.1 MB/s 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:04.926 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 01:38:05.183 /dev/nbd12 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:05.183 1+0 records in 01:38:05.183 1+0 records out 01:38:05.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705124 s, 5.8 MB/s 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:05.183 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 01:38:05.441 /dev/nbd13 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:38:05.441 1+0 records in 01:38:05.441 1+0 records out 01:38:05.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073355 s, 5.6 MB/s 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:05.441 05:32:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd0", 01:38:05.700 "bdev_name": "nvme0n1" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd1", 01:38:05.700 "bdev_name": "nvme0n2" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd10", 01:38:05.700 "bdev_name": "nvme0n3" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd11", 01:38:05.700 "bdev_name": "nvme1n1" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd12", 01:38:05.700 "bdev_name": "nvme2n1" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd13", 01:38:05.700 "bdev_name": "nvme3n1" 01:38:05.700 } 01:38:05.700 ]' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd0", 01:38:05.700 "bdev_name": "nvme0n1" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd1", 01:38:05.700 "bdev_name": "nvme0n2" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd10", 01:38:05.700 "bdev_name": "nvme0n3" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd11", 01:38:05.700 "bdev_name": "nvme1n1" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd12", 01:38:05.700 "bdev_name": "nvme2n1" 01:38:05.700 }, 01:38:05.700 { 01:38:05.700 "nbd_device": "/dev/nbd13", 01:38:05.700 "bdev_name": "nvme3n1" 01:38:05.700 } 01:38:05.700 ]' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:38:05.700 /dev/nbd1 01:38:05.700 /dev/nbd10 01:38:05.700 /dev/nbd11 01:38:05.700 /dev/nbd12 01:38:05.700 /dev/nbd13' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:38:05.700 /dev/nbd1 01:38:05.700 /dev/nbd10 01:38:05.700 /dev/nbd11 01:38:05.700 /dev/nbd12 01:38:05.700 /dev/nbd13' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:38:05.700 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:38:05.957 256+0 records in 01:38:05.957 256+0 records out 01:38:05.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00920314 s, 114 MB/s 01:38:05.957 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:38:05.957 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:38:05.957 256+0 records in 01:38:05.957 256+0 records out 01:38:05.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164402 s, 6.4 MB/s 01:38:05.957 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:38:05.957 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:38:06.214 256+0 records in 01:38:06.214 256+0 records out 01:38:06.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172734 s, 
6.1 MB/s 01:38:06.214 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:38:06.214 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 01:38:06.523 256+0 records in 01:38:06.523 256+0 records out 01:38:06.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170479 s, 6.2 MB/s 01:38:06.523 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:38:06.523 05:32:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 01:38:06.523 256+0 records in 01:38:06.523 256+0 records out 01:38:06.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163428 s, 6.4 MB/s 01:38:06.523 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:38:06.523 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 01:38:06.781 256+0 records in 01:38:06.781 256+0 records out 01:38:06.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167066 s, 6.3 MB/s 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 01:38:06.781 256+0 records in 01:38:06.781 256+0 records out 01:38:06.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145081 s, 7.2 MB/s 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:38:06.781 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 01:38:06.782 
05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:06.782 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:07.347 05:32:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:07.912 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:08.169 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:08.427 05:32:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:08.684 
05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:08.684 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:38:08.942 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:38:09.199 malloc_lvol_verify 01:38:09.200 05:33:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:38:09.458 9f102e35-df26-411d-9ac2-608f78d976f0 01:38:09.458 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:38:09.717 d4ca6b45-c3f1-4774-a864-0b06f640a8c6 01:38:09.717 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:38:09.976 /dev/nbd0 01:38:09.976 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:38:09.976 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 01:38:09.977 mke2fs 1.47.0 (5-Feb-2023) 01:38:09.977 Discarding device blocks: 0/4096 
done 01:38:09.977 Creating filesystem with 4096 1k blocks and 1024 inodes 01:38:09.977 01:38:09.977 Allocating group tables: 0/1 done 01:38:09.977 Writing inode tables: 0/1 done 01:38:09.977 Creating journal (1024 blocks): done 01:38:09.977 Writing superblocks and filesystem accounting information: 0/1 done 01:38:09.977 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:38:09.977 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74098 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74098 ']' 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74098 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74098 01:38:10.544 killing process with pid 74098 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74098' 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74098 01:38:10.544 05:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74098 01:38:11.921 05:33:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:38:11.921 01:38:11.921 real 0m13.283s 01:38:11.921 user 0m18.663s 01:38:11.921 sys 0m4.470s 01:38:11.921 05:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:11.921 05:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:38:11.921 ************************************ 01:38:11.921 END TEST bdev_nbd 01:38:11.921 ************************************ 
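The bdev_nbd suite that just finished exercises one pattern repeatedly: export a bdev as /dev/nbdX over the RPC socket, poll /proc/partitions until the kernel registers it, push data through with O_DIRECT dd, read it back with cmp, then tear the export down. A minimal standalone sketch of that round trip, assuming an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock and exposes a bdev named nvme0n1 (adjust both for your setup; paths follow this CI VM's layout):

#!/usr/bin/env bash
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
sock=/var/tmp/spdk-nbd.sock
bdev=nvme0n1
nbd=/dev/nbd0
tmp=$(mktemp)

"$rpc" -s "$sock" nbd_start_disk "$bdev" "$nbd"

# Poll until the kernel has registered the nbd device, as waitfornbd does above.
for _ in $(seq 1 20); do
    if grep -q -w "$(basename "$nbd")" /proc/partitions; then break; fi
    sleep 0.1
done

dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the export
cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte read-back check

"$rpc" -s "$sock" nbd_stop_disk "$nbd"
rm -f "$tmp"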
01:38:11.921 05:33:03 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:38:11.921 05:33:03 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 01:38:11.921 05:33:03 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 01:38:11.921 05:33:03 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 01:38:11.921 05:33:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:38:11.921 05:33:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:11.921 05:33:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:11.921 ************************************ 01:38:11.921 START TEST bdev_fio 01:38:11.921 ************************************ 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 01:38:11.921 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 01:38:11.921 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 01:38:11.922 ************************************ 01:38:11.922 START TEST bdev_fio_rw_verify 01:38:11.922 ************************************ 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:38:11.922 05:33:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:38:11.922 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:38:11.922 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:38:11.922 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:38:11.922 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:38:11.922 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:38:11.922 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:38:11.922 fio-3.35 01:38:11.922 Starting 6 threads 01:38:24.120 01:38:24.120 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74538: Mon Dec 9 05:33:14 2024 01:38:24.120 read: IOPS=28.8k, BW=113MiB/s (118MB/s)(1126MiB/10001msec) 01:38:24.120 slat (usec): min=2, max=2579, avg= 8.15, stdev= 9.06 01:38:24.120 clat (usec): min=92, max=4258, avg=611.79, stdev=242.92 01:38:24.120 lat (usec): min=98, max=4264, avg=619.93, stdev=244.09 
01:38:24.120 clat percentiles (usec): 01:38:24.120 | 50.000th=[ 611], 99.000th=[ 1188], 99.900th=[ 1729], 99.990th=[ 3720], 01:38:24.120 | 99.999th=[ 4228] 01:38:24.120 write: IOPS=29.2k, BW=114MiB/s (120MB/s)(1140MiB/10001msec); 0 zone resets 01:38:24.120 slat (usec): min=12, max=1640, avg=30.61, stdev=36.53 01:38:24.120 clat (usec): min=75, max=7532, avg=741.61, stdev=268.83 01:38:24.120 lat (usec): min=97, max=7550, avg=772.21, stdev=272.48 01:38:24.120 clat percentiles (usec): 01:38:24.120 | 50.000th=[ 742], 99.000th=[ 1467], 99.900th=[ 2073], 99.990th=[ 3851], 01:38:24.120 | 99.999th=[ 7504] 01:38:24.120 bw ( KiB/s): min=96439, max=144155, per=100.00%, avg=117643.05, stdev=2425.79, samples=114 01:38:24.120 iops : min=24107, max=36037, avg=29410.16, stdev=606.45, samples=114 01:38:24.120 lat (usec) : 100=0.01%, 250=3.51%, 500=23.26%, 750=34.46%, 1000=29.14% 01:38:24.120 lat (msec) : 2=9.54%, 4=0.08%, 10=0.01% 01:38:24.120 cpu : usr=53.20%, sys=30.50%, ctx=8283, majf=0, minf=24590 01:38:24.120 IO depths : 1=11.4%, 2=23.7%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 01:38:24.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:38:24.120 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:38:24.120 issued rwts: total=288357,291791,0,0 short=0,0,0,0 dropped=0,0,0,0 01:38:24.120 latency : target=0, window=0, percentile=100.00%, depth=8 01:38:24.120 01:38:24.120 Run status group 0 (all jobs): 01:38:24.120 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1126MiB (1181MB), run=10001-10001msec 01:38:24.120 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1140MiB (1195MB), run=10001-10001msec 01:38:24.380 ----------------------------------------------------- 01:38:24.380 Suppressions used: 01:38:24.380 count bytes template 01:38:24.380 6 48 /usr/src/fio/parse.c 01:38:24.380 3215 308640 /usr/src/fio/iolog.c 01:38:24.380 1 8 libtcmalloc_minimal.so 01:38:24.380 1 904 libcrypto.so 01:38:24.380 ----------------------------------------------------- 01:38:24.380 01:38:24.380 01:38:24.380 real 0m12.621s 01:38:24.380 user 0m33.963s 01:38:24.380 sys 0m18.745s 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 01:38:24.380 ************************************ 01:38:24.380 END TEST bdev_fio_rw_verify 01:38:24.380 ************************************ 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 01:38:24.380 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e75c492b-eefc-42a6-b707-c56bb54feb0d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e75c492b-eefc-42a6-b707-c56bb54feb0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "07586d11-2f1c-44b3-ab26-ccf8a28717c1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "07586d11-2f1c-44b3-ab26-ccf8a28717c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "e8b72853-92ce-4300-9d39-03b8ee6f390a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e8b72853-92ce-4300-9d39-03b8ee6f390a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9a3fbaa9-f17c-498b-9d98-371bf4314b4f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9a3fbaa9-f17c-498b-9d98-371bf4314b4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e9383fc8-08e0-4a9b-8447-2b1d8ec05561"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e9383fc8-08e0-4a9b-8447-2b1d8ec05561",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "81a7f009-d0ef-4393-8153-7e064a5db10b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "81a7f009-d0ef-4393-8153-7e064a5db10b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:38:24.381 /home/vagrant/spdk_repo/spdk 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 01:38:24.381 01:38:24.381 real 0m12.818s 01:38:24.381 user 
0m34.063s 01:38:24.381 sys 0m18.843s 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:24.381 05:33:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 01:38:24.381 ************************************ 01:38:24.381 END TEST bdev_fio 01:38:24.381 ************************************ 01:38:24.639 05:33:16 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 01:38:24.639 05:33:16 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:38:24.639 05:33:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:38:24.639 05:33:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:24.639 05:33:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:24.640 ************************************ 01:38:24.640 START TEST bdev_verify 01:38:24.640 ************************************ 01:38:24.640 05:33:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:38:24.640 [2024-12-09 05:33:16.151007] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:38:24.640 [2024-12-09 05:33:16.151211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74710 ] 01:38:24.899 [2024-12-09 05:33:16.345119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:38:24.899 [2024-12-09 05:33:16.497133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:24.899 [2024-12-09 05:33:16.497138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:38:25.467 Running I/O for 5 seconds... 
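The bdev_verify stage that has just started drives all six xNVMe bdevs through bdevperf's verify workload, which writes a pattern and reads it back for comparison. A hedged reconstruction of the invocation from the xtrace above, with the flag meanings spelled out (paths assume this CI VM's repo layout):

# -q 128     queue depth per job
# -o 4096    I/O size in bytes (4 KiB)
# -w verify  write a pattern, read it back, and compare
# -t 5       run time in seconds
# -C         let every core submit I/O to every bdev
# -m 0x3     core mask: reactors on cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-device table that follows lists each bdev twice, once per core mask bit, which is the -C and -m 0x3 combination at work.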
01:38:27.776 21888.00 IOPS, 85.50 MiB/s [2024-12-09T05:33:20.386Z] 21952.00 IOPS, 85.75 MiB/s [2024-12-09T05:33:21.319Z] 21322.67 IOPS, 83.29 MiB/s [2024-12-09T05:33:22.254Z] 21536.75 IOPS, 84.13 MiB/s [2024-12-09T05:33:22.254Z] 21561.60 IOPS, 84.22 MiB/s 01:38:30.637 Latency(us) 01:38:30.637 [2024-12-09T05:33:22.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:38:30.637 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x0 length 0x80000 01:38:30.637 nvme0n1 : 5.02 1657.07 6.47 0.00 0.00 77109.71 12928.47 72447.07 01:38:30.637 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x80000 length 0x80000 01:38:30.637 nvme0n1 : 5.04 1447.95 5.66 0.00 0.00 88252.23 11141.12 81502.95 01:38:30.637 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x0 length 0x80000 01:38:30.637 nvme0n2 : 5.03 1652.70 6.46 0.00 0.00 77167.59 18945.86 64821.06 01:38:30.637 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x80000 length 0x80000 01:38:30.637 nvme0n2 : 5.04 1447.45 5.65 0.00 0.00 88119.27 11260.28 76260.07 01:38:30.637 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x0 length 0x80000 01:38:30.637 nvme0n3 : 5.04 1651.60 6.45 0.00 0.00 77078.71 15847.80 64344.44 01:38:30.637 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x80000 length 0x80000 01:38:30.637 nvme0n3 : 5.03 1449.14 5.66 0.00 0.00 87863.38 11498.59 73876.95 01:38:30.637 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x0 length 0xbd0bd 01:38:30.637 nvme1n1 : 5.08 3107.40 12.14 0.00 0.00 40832.91 5421.61 52190.49 01:38:30.637 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0xbd0bd length 0xbd0bd 01:38:30.637 nvme1n1 : 5.05 2684.35 10.49 0.00 0.00 47230.90 5987.61 50522.30 01:38:30.637 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x0 length 0x20000 01:38:30.637 nvme2n1 : 5.08 1662.85 6.50 0.00 0.00 76150.54 7000.44 67680.81 01:38:30.637 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x20000 length 0x20000 01:38:30.637 nvme2n1 : 5.06 1466.52 5.73 0.00 0.00 86402.43 4200.26 68157.44 01:38:30.637 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0x0 length 0xa0000 01:38:30.637 nvme3n1 : 5.08 1662.37 6.49 0.00 0.00 76042.38 7447.27 70540.57 01:38:30.637 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:38:30.637 Verification LBA range: start 0xa0000 length 0xa0000 01:38:30.637 nvme3n1 : 5.06 1442.23 5.63 0.00 0.00 87709.45 5213.09 74353.57 01:38:30.637 [2024-12-09T05:33:22.254Z] =================================================================================================================== 01:38:30.637 [2024-12-09T05:33:22.254Z] Total : 21331.62 83.33 0.00 0.00 71468.30 4200.26 81502.95 01:38:32.013 01:38:32.013 real 0m7.396s 01:38:32.013 user 0m11.599s 01:38:32.013 sys 0m1.813s 01:38:32.013 05:33:23 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:38:32.013 05:33:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 01:38:32.013 ************************************ 01:38:32.013 END TEST bdev_verify 01:38:32.013 ************************************ 01:38:32.013 05:33:23 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:38:32.013 05:33:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:38:32.013 05:33:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:32.013 05:33:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:32.013 ************************************ 01:38:32.013 START TEST bdev_verify_big_io 01:38:32.013 ************************************ 01:38:32.013 05:33:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:38:32.013 [2024-12-09 05:33:23.582764] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:38:32.013 [2024-12-09 05:33:23.583719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74810 ] 01:38:32.272 [2024-12-09 05:33:23.759175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:38:32.272 [2024-12-09 05:33:23.886625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:32.272 [2024-12-09 05:33:23.886633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:38:33.209 Running I/O for 5 seconds... 
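The MiB/s column in these bdevperf samples is simply IOPS multiplied by the I/O size. For the 64 KiB big-I/O run that follows, the first sample works out as 1083 IOPS x 65536 B = 70,975,488 B/s, about 67.69 MiB/s, matching the printed figure; the 4 KiB verify run above checks out the same way (21888 IOPS x 4096 B, about 85.50 MiB/s).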
01:38:38.394 1083.00 IOPS, 67.69 MiB/s [2024-12-09T05:33:30.576Z] 2470.00 IOPS, 154.38 MiB/s [2024-12-09T05:33:30.576Z] 3249.33 IOPS, 203.08 MiB/s 01:38:38.959 Latency(us) 01:38:38.959 [2024-12-09T05:33:30.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:38:38.959 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:38:38.959 Verification LBA range: start 0x0 length 0x8000 01:38:38.960 nvme0n1 : 5.96 108.65 6.79 0.00 0.00 1152917.62 83886.08 1014258.97 01:38:38.960 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x8000 length 0x8000 01:38:38.960 nvme0n1 : 5.82 153.83 9.61 0.00 0.00 796372.65 60054.81 1021884.97 01:38:38.960 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x0 length 0x8000 01:38:38.960 nvme0n2 : 5.95 120.94 7.56 0.00 0.00 1012287.76 20971.52 1212535.16 01:38:38.960 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x8000 length 0x8000 01:38:38.960 nvme0n2 : 5.72 134.35 8.40 0.00 0.00 895045.51 143940.89 907494.87 01:38:38.960 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x0 length 0x8000 01:38:38.960 nvme0n3 : 5.95 137.22 8.58 0.00 0.00 868309.07 8519.68 1243039.19 01:38:38.960 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x8000 length 0x8000 01:38:38.960 nvme0n3 : 5.83 137.25 8.58 0.00 0.00 830927.67 59101.56 1715851.64 01:38:38.960 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x0 length 0xbd0b 01:38:38.960 nvme1n1 : 5.95 193.65 12.10 0.00 0.00 590287.90 13643.40 713031.68 01:38:38.960 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0xbd0b length 0xbd0b 01:38:38.960 nvme1n1 : 5.97 147.37 9.21 0.00 0.00 771932.36 20733.21 2059021.96 01:38:38.960 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x0 length 0x2000 01:38:38.960 nvme2n1 : 5.95 137.04 8.56 0.00 0.00 808485.89 15192.44 1258291.20 01:38:38.960 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x2000 length 0x2000 01:38:38.960 nvme2n1 : 5.94 151.64 9.48 0.00 0.00 726303.05 95325.09 1471819.40 01:38:38.960 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0x0 length 0xa000 01:38:38.960 nvme3n1 : 5.96 126.15 7.88 0.00 0.00 857420.12 7804.74 2852126.72 01:38:38.960 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:38:38.960 Verification LBA range: start 0xa000 length 0xa000 01:38:38.960 nvme3n1 : 5.98 185.81 11.61 0.00 0.00 578837.13 4200.26 964689.92 01:38:38.960 [2024-12-09T05:33:30.577Z] =================================================================================================================== 01:38:38.960 [2024-12-09T05:33:30.577Z] Total : 1733.89 108.37 0.00 0.00 800323.70 4200.26 2852126.72 01:38:40.335 01:38:40.335 real 0m8.234s 01:38:40.335 user 0m14.794s 01:38:40.335 sys 0m0.652s 01:38:40.335 05:33:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:40.335 05:33:31 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 01:38:40.335 ************************************ 01:38:40.335 END TEST bdev_verify_big_io 01:38:40.335 ************************************ 01:38:40.335 05:33:31 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:38:40.335 05:33:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:38:40.335 05:33:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:40.335 05:33:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:40.335 ************************************ 01:38:40.335 START TEST bdev_write_zeroes 01:38:40.335 ************************************ 01:38:40.335 05:33:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:38:40.335 [2024-12-09 05:33:31.889613] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:38:40.335 [2024-12-09 05:33:31.889826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74920 ] 01:38:40.595 [2024-12-09 05:33:32.078258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:40.854 [2024-12-09 05:33:32.235002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:41.112 Running I/O for 1 seconds... 
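In the write_zeroes results that follow, nvme1n1 completes at roughly half the average latency of the other devices, so when comparing such numbers it helps to confirm which bdevs report write_zeroes in supported_io_types, as seen in the JSON bdev dump earlier in this log. A hedged one-liner against a running SPDK app, reusing the same rpc.py conventions as above:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'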
01:38:42.484 67072.00 IOPS, 262.00 MiB/s 01:38:42.484 Latency(us) 01:38:42.484 [2024-12-09T05:33:34.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:38:42.484 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:38:42.484 nvme0n1 : 1.02 10031.33 39.18 0.00 0.00 12746.16 6166.34 21567.30 01:38:42.484 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:38:42.484 nvme0n2 : 1.02 10019.78 39.14 0.00 0.00 12750.20 6345.08 21924.77 01:38:42.484 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:38:42.484 nvme0n3 : 1.02 10009.46 39.10 0.00 0.00 12752.11 6255.71 22282.24 01:38:42.484 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:38:42.484 nvme1n1 : 1.03 16279.97 63.59 0.00 0.00 7826.31 4349.21 20256.58 01:38:42.484 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:38:42.484 nvme2n1 : 1.03 9983.76 39.00 0.00 0.00 12700.12 6255.71 22758.87 01:38:42.484 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:38:42.484 nvme3n1 : 1.03 9970.49 38.95 0.00 0.00 12706.15 6166.34 22163.08 01:38:42.484 [2024-12-09T05:33:34.101Z] =================================================================================================================== 01:38:42.484 [2024-12-09T05:33:34.101Z] Total : 66294.78 258.96 0.00 0.00 11522.69 4349.21 22758.87 01:38:43.419 01:38:43.419 real 0m3.051s 01:38:43.419 user 0m2.242s 01:38:43.419 sys 0m0.638s 01:38:43.419 05:33:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:43.419 05:33:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 01:38:43.419 ************************************ 01:38:43.419 END TEST bdev_write_zeroes 01:38:43.419 ************************************ 01:38:43.419 05:33:34 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:38:43.419 05:33:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:38:43.419 05:33:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:43.419 05:33:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:43.419 ************************************ 01:38:43.419 START TEST bdev_json_nonenclosed 01:38:43.419 ************************************ 01:38:43.419 05:33:34 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:38:43.419 [2024-12-09 05:33:35.003767] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
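bdev_json_nonenclosed is a negative test: bdevperf is fed a configuration whose top level is not wrapped in a JSON object, and the expected outcome is the "not enclosed in {}" error and non-zero exit seen below. The actual contents of test/bdev/nonenclosed.json are not shown in this log; a file of roughly this shape is a hypothetical reconstruction that triggers the same failure:

cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [
  { "subsystem": "bdev", "config": [] }
]
EOF
# A well-formed config wraps the same content in a top-level object:
# { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }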
01:38:43.419 [2024-12-09 05:33:35.003964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74979 ] 01:38:43.676 [2024-12-09 05:33:35.187579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:43.934 [2024-12-09 05:33:35.317208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:43.934 [2024-12-09 05:33:35.317343] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:38:43.934 [2024-12-09 05:33:35.317370] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:38:43.934 [2024-12-09 05:33:35.317383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:38:44.192 01:38:44.192 real 0m0.746s 01:38:44.192 user 0m0.503s 01:38:44.192 sys 0m0.136s 01:38:44.192 05:33:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:44.192 05:33:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:38:44.192 ************************************ 01:38:44.192 END TEST bdev_json_nonenclosed 01:38:44.192 ************************************ 01:38:44.192 05:33:35 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:38:44.192 05:33:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:38:44.192 05:33:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:44.192 05:33:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:44.192 ************************************ 01:38:44.192 START TEST bdev_json_nonarray 01:38:44.192 ************************************ 01:38:44.192 05:33:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:38:44.192 [2024-12-09 05:33:35.801025] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:38:44.192 [2024-12-09 05:33:35.801226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75005 ] 01:38:44.450 [2024-12-09 05:33:35.988621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:44.725 [2024-12-09 05:33:36.102926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:44.725 [2024-12-09 05:33:36.103121] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
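Both JSON negative tests follow the same pattern: bdevperf is pointed at a deliberately malformed configuration, and the test passes when spdk_app_start rejects it cleanly with the json_config_prepare_ctx errors traced above, instead of crashing. This log does not show the contents of nonenclosed.json or nonarray.json, so the following are hypothetical reconstructions of inputs that would trigger exactly these two errors:

    # nonenclosed.json (hypothetical): top level not enclosed in {}
    "subsystems": []

    # nonarray.json (hypothetical): "subsystems" present but not an array
    { "subsystems": {} }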
01:38:44.725 [2024-12-09 05:33:36.103150] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:38:44.725 [2024-12-09 05:33:36.103164] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:38:44.983 01:38:44.983 real 0m0.730s 01:38:44.983 user 0m0.474s 01:38:44.983 sys 0m0.150s 01:38:44.983 05:33:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:44.983 05:33:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:38:44.983 ************************************ 01:38:44.983 END TEST bdev_json_nonarray 01:38:44.983 ************************************ 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 01:38:44.983 05:33:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:38:45.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:38:46.119 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:38:46.119 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:38:46.119 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 01:38:46.119 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 01:38:46.377 01:38:46.377 real 0m58.500s 01:38:46.377 user 1m37.582s 01:38:46.377 sys 0m30.145s 01:38:46.377 05:33:37 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:46.377 05:33:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 01:38:46.377 ************************************ 01:38:46.377 END TEST blockdev_xnvme 01:38:46.377 ************************************ 01:38:46.377 05:33:37 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 01:38:46.378 05:33:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:38:46.378 05:33:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:46.378 05:33:37 -- common/autotest_common.sh@10 -- # set +x 01:38:46.378 ************************************ 01:38:46.378 START TEST ublk 01:38:46.378 ************************************ 01:38:46.378 05:33:37 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 01:38:46.378 * Looking for test storage... 
01:38:46.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 01:38:46.378 05:33:37 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:38:46.378 05:33:37 ublk -- common/autotest_common.sh@1693 -- # lcov --version 01:38:46.378 05:33:37 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 01:38:46.637 05:33:38 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 01:38:46.637 05:33:38 ublk -- scripts/common.sh@336 -- # IFS=.-: 01:38:46.637 05:33:38 ublk -- scripts/common.sh@336 -- # read -ra ver1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@337 -- # IFS=.-: 01:38:46.637 05:33:38 ublk -- scripts/common.sh@337 -- # read -ra ver2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@338 -- # local 'op=<' 01:38:46.637 05:33:38 ublk -- scripts/common.sh@340 -- # ver1_l=2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@341 -- # ver2_l=1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:38:46.637 05:33:38 ublk -- scripts/common.sh@344 -- # case "$op" in 01:38:46.637 05:33:38 ublk -- scripts/common.sh@345 -- # : 1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 01:38:46.637 05:33:38 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:38:46.637 05:33:38 ublk -- scripts/common.sh@365 -- # decimal 1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@353 -- # local d=1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:38:46.637 05:33:38 ublk -- scripts/common.sh@355 -- # echo 1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@365 -- # ver1[v]=1 01:38:46.637 05:33:38 ublk -- scripts/common.sh@366 -- # decimal 2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@353 -- # local d=2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:38:46.637 05:33:38 ublk -- scripts/common.sh@355 -- # echo 2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@366 -- # ver2[v]=2 01:38:46.637 05:33:38 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:38:46.637 05:33:38 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:38:46.637 05:33:38 ublk -- scripts/common.sh@368 -- # return 0 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:38:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:38:46.637 --rc genhtml_branch_coverage=1 01:38:46.637 --rc genhtml_function_coverage=1 01:38:46.637 --rc genhtml_legend=1 01:38:46.637 --rc geninfo_all_blocks=1 01:38:46.637 --rc geninfo_unexecuted_blocks=1 01:38:46.637 01:38:46.637 ' 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:38:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:38:46.637 --rc genhtml_branch_coverage=1 01:38:46.637 --rc genhtml_function_coverage=1 01:38:46.637 --rc genhtml_legend=1 01:38:46.637 --rc geninfo_all_blocks=1 01:38:46.637 --rc geninfo_unexecuted_blocks=1 01:38:46.637 01:38:46.637 ' 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:38:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:38:46.637 --rc genhtml_branch_coverage=1 01:38:46.637 --rc 
genhtml_function_coverage=1 01:38:46.637 --rc genhtml_legend=1 01:38:46.637 --rc geninfo_all_blocks=1 01:38:46.637 --rc geninfo_unexecuted_blocks=1 01:38:46.637 01:38:46.637 ' 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:38:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:38:46.637 --rc genhtml_branch_coverage=1 01:38:46.637 --rc genhtml_function_coverage=1 01:38:46.637 --rc genhtml_legend=1 01:38:46.637 --rc geninfo_all_blocks=1 01:38:46.637 --rc geninfo_unexecuted_blocks=1 01:38:46.637 01:38:46.637 ' 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 01:38:46.637 05:33:38 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 01:38:46.637 05:33:38 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 01:38:46.637 05:33:38 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 01:38:46.637 05:33:38 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 01:38:46.637 05:33:38 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 01:38:46.637 05:33:38 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 01:38:46.637 05:33:38 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 01:38:46.637 05:33:38 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 01:38:46.637 05:33:38 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:46.637 05:33:38 ublk -- common/autotest_common.sh@10 -- # set +x 01:38:46.637 ************************************ 01:38:46.637 START TEST test_save_ublk_config 01:38:46.637 ************************************ 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75300 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75300 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75300 ']' 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:38:46.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
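test_save_config launches spdk_tgt in the background (pid 75300 here) and must hold off on ublk RPCs until the target's JSON-RPC server is actually listening on /var/tmp/spdk.sock. A simplified sketch of that wait using SPDK's stock rpc.py client (the real waitforlisten helper in autotest_common.sh is more thorough than this):

    # Poll until the RPC server answers; rpc_get_methods is a cheap query.
    # Unbounded loop kept for brevity; a real harness would cap retries.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk & tgtpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done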
01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:38:46.637 05:33:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:38:46.637 [2024-12-09 05:33:38.179823] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:38:46.637 [2024-12-09 05:33:38.180049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75300 ] 01:38:46.896 [2024-12-09 05:33:38.375409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:47.155 [2024-12-09 05:33:38.526898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:38:48.110 [2024-12-09 05:33:39.485734] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:38:48.110 [2024-12-09 05:33:39.486979] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:38:48.110 malloc0 01:38:48.110 [2024-12-09 05:33:39.564877] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 01:38:48.110 [2024-12-09 05:33:39.565030] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 01:38:48.110 [2024-12-09 05:33:39.565048] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:38:48.110 [2024-12-09 05:33:39.565058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:38:48.110 [2024-12-09 05:33:39.573848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:38:48.110 [2024-12-09 05:33:39.573892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:38:48.110 [2024-12-09 05:33:39.579809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:38:48.110 [2024-12-09 05:33:39.579930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:38:48.110 [2024-12-09 05:33:39.596854] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:38:48.110 0 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:48.110 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:38:48.368 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:48.368 05:33:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 01:38:48.368 
"subsystems": [ 01:38:48.368 { 01:38:48.368 "subsystem": "fsdev", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "fsdev_set_opts", 01:38:48.368 "params": { 01:38:48.368 "fsdev_io_pool_size": 65535, 01:38:48.368 "fsdev_io_cache_size": 256 01:38:48.368 } 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "keyring", 01:38:48.368 "config": [] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "iobuf", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "iobuf_set_options", 01:38:48.368 "params": { 01:38:48.368 "small_pool_count": 8192, 01:38:48.368 "large_pool_count": 1024, 01:38:48.368 "small_bufsize": 8192, 01:38:48.368 "large_bufsize": 135168, 01:38:48.368 "enable_numa": false 01:38:48.368 } 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "sock", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "sock_set_default_impl", 01:38:48.368 "params": { 01:38:48.368 "impl_name": "posix" 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "sock_impl_set_options", 01:38:48.368 "params": { 01:38:48.368 "impl_name": "ssl", 01:38:48.368 "recv_buf_size": 4096, 01:38:48.368 "send_buf_size": 4096, 01:38:48.368 "enable_recv_pipe": true, 01:38:48.368 "enable_quickack": false, 01:38:48.368 "enable_placement_id": 0, 01:38:48.368 "enable_zerocopy_send_server": true, 01:38:48.368 "enable_zerocopy_send_client": false, 01:38:48.368 "zerocopy_threshold": 0, 01:38:48.368 "tls_version": 0, 01:38:48.368 "enable_ktls": false 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "sock_impl_set_options", 01:38:48.368 "params": { 01:38:48.368 "impl_name": "posix", 01:38:48.368 "recv_buf_size": 2097152, 01:38:48.368 "send_buf_size": 2097152, 01:38:48.368 "enable_recv_pipe": true, 01:38:48.368 "enable_quickack": false, 01:38:48.368 "enable_placement_id": 0, 01:38:48.368 "enable_zerocopy_send_server": true, 01:38:48.368 "enable_zerocopy_send_client": false, 01:38:48.368 "zerocopy_threshold": 0, 01:38:48.368 "tls_version": 0, 01:38:48.368 "enable_ktls": false 01:38:48.368 } 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "vmd", 01:38:48.368 "config": [] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "accel", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "accel_set_options", 01:38:48.368 "params": { 01:38:48.368 "small_cache_size": 128, 01:38:48.368 "large_cache_size": 16, 01:38:48.368 "task_count": 2048, 01:38:48.368 "sequence_count": 2048, 01:38:48.368 "buf_count": 2048 01:38:48.368 } 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "bdev", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "bdev_set_options", 01:38:48.368 "params": { 01:38:48.368 "bdev_io_pool_size": 65535, 01:38:48.368 "bdev_io_cache_size": 256, 01:38:48.368 "bdev_auto_examine": true, 01:38:48.368 "iobuf_small_cache_size": 128, 01:38:48.368 "iobuf_large_cache_size": 16 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "bdev_raid_set_options", 01:38:48.368 "params": { 01:38:48.368 "process_window_size_kb": 1024, 01:38:48.368 "process_max_bandwidth_mb_sec": 0 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "bdev_iscsi_set_options", 01:38:48.368 "params": { 01:38:48.368 "timeout_sec": 30 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "bdev_nvme_set_options", 01:38:48.368 "params": { 01:38:48.368 "action_on_timeout": "none", 
01:38:48.368 "timeout_us": 0, 01:38:48.368 "timeout_admin_us": 0, 01:38:48.368 "keep_alive_timeout_ms": 10000, 01:38:48.368 "arbitration_burst": 0, 01:38:48.368 "low_priority_weight": 0, 01:38:48.368 "medium_priority_weight": 0, 01:38:48.368 "high_priority_weight": 0, 01:38:48.368 "nvme_adminq_poll_period_us": 10000, 01:38:48.368 "nvme_ioq_poll_period_us": 0, 01:38:48.368 "io_queue_requests": 0, 01:38:48.368 "delay_cmd_submit": true, 01:38:48.368 "transport_retry_count": 4, 01:38:48.368 "bdev_retry_count": 3, 01:38:48.368 "transport_ack_timeout": 0, 01:38:48.368 "ctrlr_loss_timeout_sec": 0, 01:38:48.368 "reconnect_delay_sec": 0, 01:38:48.368 "fast_io_fail_timeout_sec": 0, 01:38:48.368 "disable_auto_failback": false, 01:38:48.368 "generate_uuids": false, 01:38:48.368 "transport_tos": 0, 01:38:48.368 "nvme_error_stat": false, 01:38:48.368 "rdma_srq_size": 0, 01:38:48.368 "io_path_stat": false, 01:38:48.368 "allow_accel_sequence": false, 01:38:48.368 "rdma_max_cq_size": 0, 01:38:48.368 "rdma_cm_event_timeout_ms": 0, 01:38:48.368 "dhchap_digests": [ 01:38:48.368 "sha256", 01:38:48.368 "sha384", 01:38:48.368 "sha512" 01:38:48.368 ], 01:38:48.368 "dhchap_dhgroups": [ 01:38:48.368 "null", 01:38:48.368 "ffdhe2048", 01:38:48.368 "ffdhe3072", 01:38:48.368 "ffdhe4096", 01:38:48.368 "ffdhe6144", 01:38:48.368 "ffdhe8192" 01:38:48.368 ] 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "bdev_nvme_set_hotplug", 01:38:48.368 "params": { 01:38:48.368 "period_us": 100000, 01:38:48.368 "enable": false 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "bdev_malloc_create", 01:38:48.368 "params": { 01:38:48.368 "name": "malloc0", 01:38:48.368 "num_blocks": 8192, 01:38:48.368 "block_size": 4096, 01:38:48.368 "physical_block_size": 4096, 01:38:48.368 "uuid": "0534b66a-37dd-4966-991a-0f11b8d6d80d", 01:38:48.368 "optimal_io_boundary": 0, 01:38:48.368 "md_size": 0, 01:38:48.368 "dif_type": 0, 01:38:48.368 "dif_is_head_of_md": false, 01:38:48.368 "dif_pi_format": 0 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "bdev_wait_for_examine" 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "scsi", 01:38:48.368 "config": null 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "scheduler", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "framework_set_scheduler", 01:38:48.368 "params": { 01:38:48.368 "name": "static" 01:38:48.368 } 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "vhost_scsi", 01:38:48.368 "config": [] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "vhost_blk", 01:38:48.368 "config": [] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "ublk", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "ublk_create_target", 01:38:48.368 "params": { 01:38:48.368 "cpumask": "1" 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "ublk_start_disk", 01:38:48.368 "params": { 01:38:48.368 "bdev_name": "malloc0", 01:38:48.368 "ublk_id": 0, 01:38:48.368 "num_queues": 1, 01:38:48.368 "queue_depth": 128 01:38:48.368 } 01:38:48.368 } 01:38:48.368 ] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "nbd", 01:38:48.368 "config": [] 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "subsystem": "nvmf", 01:38:48.368 "config": [ 01:38:48.368 { 01:38:48.368 "method": "nvmf_set_config", 01:38:48.368 "params": { 01:38:48.368 "discovery_filter": "match_any", 01:38:48.368 "admin_cmd_passthru": { 01:38:48.368 "identify_ctrlr": false 
01:38:48.368 }, 01:38:48.368 "dhchap_digests": [ 01:38:48.368 "sha256", 01:38:48.368 "sha384", 01:38:48.368 "sha512" 01:38:48.368 ], 01:38:48.368 "dhchap_dhgroups": [ 01:38:48.368 "null", 01:38:48.368 "ffdhe2048", 01:38:48.368 "ffdhe3072", 01:38:48.368 "ffdhe4096", 01:38:48.368 "ffdhe6144", 01:38:48.368 "ffdhe8192" 01:38:48.368 ] 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "nvmf_set_max_subsystems", 01:38:48.368 "params": { 01:38:48.368 "max_subsystems": 1024 01:38:48.368 } 01:38:48.368 }, 01:38:48.368 { 01:38:48.368 "method": "nvmf_set_crdt", 01:38:48.368 "params": { 01:38:48.368 "crdt1": 0, 01:38:48.368 "crdt2": 0, 01:38:48.368 "crdt3": 0 01:38:48.368 } 01:38:48.368 } 01:38:48.369 ] 01:38:48.369 }, 01:38:48.369 { 01:38:48.369 "subsystem": "iscsi", 01:38:48.369 "config": [ 01:38:48.369 { 01:38:48.369 "method": "iscsi_set_options", 01:38:48.369 "params": { 01:38:48.369 "node_base": "iqn.2016-06.io.spdk", 01:38:48.369 "max_sessions": 128, 01:38:48.369 "max_connections_per_session": 2, 01:38:48.369 "max_queue_depth": 64, 01:38:48.369 "default_time2wait": 2, 01:38:48.369 "default_time2retain": 20, 01:38:48.369 "first_burst_length": 8192, 01:38:48.369 "immediate_data": true, 01:38:48.369 "allow_duplicated_isid": false, 01:38:48.369 "error_recovery_level": 0, 01:38:48.369 "nop_timeout": 60, 01:38:48.369 "nop_in_interval": 30, 01:38:48.369 "disable_chap": false, 01:38:48.369 "require_chap": false, 01:38:48.369 "mutual_chap": false, 01:38:48.369 "chap_group": 0, 01:38:48.369 "max_large_datain_per_connection": 64, 01:38:48.369 "max_r2t_per_connection": 4, 01:38:48.369 "pdu_pool_size": 36864, 01:38:48.369 "immediate_data_pool_size": 16384, 01:38:48.369 "data_out_pool_size": 2048 01:38:48.369 } 01:38:48.369 } 01:38:48.369 ] 01:38:48.369 } 01:38:48.369 ] 01:38:48.369 }' 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75300 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75300 ']' 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75300 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75300 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:38:48.369 killing process with pid 75300 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75300' 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75300 01:38:48.369 05:33:39 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75300 01:38:49.787 [2024-12-09 05:33:41.208950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:38:49.787 [2024-12-09 05:33:41.240913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:38:49.787 [2024-12-09 05:33:41.241073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:38:49.787 [2024-12-09 05:33:41.247858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:38:49.787 [2024-12-09 
05:33:41.247929] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:38:49.787 [2024-12-09 05:33:41.247951] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:38:49.787 [2024-12-09 05:33:41.247982] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:38:49.787 [2024-12-09 05:33:41.248195] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:38:51.688 05:33:42 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75362 01:38:51.688 05:33:42 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75362 01:38:51.688 05:33:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75362 ']' 01:38:51.688 05:33:42 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 01:38:51.688 05:33:42 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 01:38:51.688 "subsystems": [ 01:38:51.688 { 01:38:51.688 "subsystem": "fsdev", 01:38:51.688 "config": [ 01:38:51.688 { 01:38:51.688 "method": "fsdev_set_opts", 01:38:51.688 "params": { 01:38:51.688 "fsdev_io_pool_size": 65535, 01:38:51.688 "fsdev_io_cache_size": 256 01:38:51.688 } 01:38:51.688 } 01:38:51.688 ] 01:38:51.688 }, 01:38:51.688 { 01:38:51.688 "subsystem": "keyring", 01:38:51.688 "config": [] 01:38:51.688 }, 01:38:51.688 { 01:38:51.688 "subsystem": "iobuf", 01:38:51.688 "config": [ 01:38:51.688 { 01:38:51.688 "method": "iobuf_set_options", 01:38:51.688 "params": { 01:38:51.688 "small_pool_count": 8192, 01:38:51.688 "large_pool_count": 1024, 01:38:51.688 "small_bufsize": 8192, 01:38:51.688 "large_bufsize": 135168, 01:38:51.688 "enable_numa": false 01:38:51.688 } 01:38:51.688 } 01:38:51.688 ] 01:38:51.688 }, 01:38:51.688 { 01:38:51.688 "subsystem": "sock", 01:38:51.688 "config": [ 01:38:51.689 { 01:38:51.689 "method": "sock_set_default_impl", 01:38:51.689 "params": { 01:38:51.689 "impl_name": "posix" 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "sock_impl_set_options", 01:38:51.689 "params": { 01:38:51.689 "impl_name": "ssl", 01:38:51.689 "recv_buf_size": 4096, 01:38:51.689 "send_buf_size": 4096, 01:38:51.689 "enable_recv_pipe": true, 01:38:51.689 "enable_quickack": false, 01:38:51.689 "enable_placement_id": 0, 01:38:51.689 "enable_zerocopy_send_server": true, 01:38:51.689 "enable_zerocopy_send_client": false, 01:38:51.689 "zerocopy_threshold": 0, 01:38:51.689 "tls_version": 0, 01:38:51.689 "enable_ktls": false 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "sock_impl_set_options", 01:38:51.689 "params": { 01:38:51.689 "impl_name": "posix", 01:38:51.689 "recv_buf_size": 2097152, 01:38:51.689 "send_buf_size": 2097152, 01:38:51.689 "enable_recv_pipe": true, 01:38:51.689 "enable_quickack": false, 01:38:51.689 "enable_placement_id": 0, 01:38:51.689 "enable_zerocopy_send_server": true, 01:38:51.689 "enable_zerocopy_send_client": false, 01:38:51.689 "zerocopy_threshold": 0, 01:38:51.689 "tls_version": 0, 01:38:51.689 "enable_ktls": false 01:38:51.689 } 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "vmd", 01:38:51.689 "config": [] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "accel", 01:38:51.689 "config": [ 01:38:51.689 { 01:38:51.689 "method": "accel_set_options", 01:38:51.689 "params": { 01:38:51.689 "small_cache_size": 128, 01:38:51.689 "large_cache_size": 16, 01:38:51.689 "task_count": 2048, 01:38:51.689 "sequence_count": 2048, 01:38:51.689 "buf_count": 2048 01:38:51.689 } 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }, 01:38:51.689 { 
01:38:51.689 "subsystem": "bdev", 01:38:51.689 "config": [ 01:38:51.689 { 01:38:51.689 "method": "bdev_set_options", 01:38:51.689 "params": { 01:38:51.689 "bdev_io_pool_size": 65535, 01:38:51.689 "bdev_io_cache_size": 256, 01:38:51.689 "bdev_auto_examine": true, 01:38:51.689 "iobuf_small_cache_size": 128, 01:38:51.689 "iobuf_large_cache_size": 16 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "bdev_raid_set_options", 01:38:51.689 "params": { 01:38:51.689 "process_window_size_kb": 1024, 01:38:51.689 "process_max_bandwidth_mb_sec": 0 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "bdev_iscsi_set_options", 01:38:51.689 "params": { 01:38:51.689 "timeout_sec": 30 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "bdev_nvme_set_options", 01:38:51.689 "params": { 01:38:51.689 "action_on_timeout": "none", 01:38:51.689 "timeout_us": 0, 01:38:51.689 "timeout_admin_us": 0, 01:38:51.689 "keep_alive_timeout_ms": 10000, 01:38:51.689 "arbitration_burst": 0, 01:38:51.689 "low_priority_weight": 0, 01:38:51.689 "medium_priority_weight": 0, 01:38:51.689 "high_priority_weight": 0, 01:38:51.689 "nvme_adminq_poll_period_us": 10000, 01:38:51.689 "nvme_ioq_poll_period_us": 0, 01:38:51.689 "io_queue_requests": 0, 01:38:51.689 "delay_cmd_submit": true, 01:38:51.689 "transport_retry_count": 4, 01:38:51.689 "bdev_retry_count": 3, 01:38:51.689 "transport_ack_timeout": 0, 01:38:51.689 "ctrlr_loss_timeout_sec": 0, 01:38:51.689 "reconnect_delay_sec": 0, 01:38:51.689 "fast_io_fail_timeout_sec": 0, 01:38:51.689 "disable_auto_failback": false, 01:38:51.689 "generate_uuids": false, 01:38:51.689 "transport_tos": 0, 01:38:51.689 "nvme_error_stat": false, 01:38:51.689 "rdma_srq_size": 0, 01:38:51.689 "io_path_stat": false, 01:38:51.689 "allow_accel_sequence": false, 01:38:51.689 "rdma_max_cq_size": 0, 01:38:51.689 "rdma_cm_event_timeout_ms": 0, 01:38:51.689 "dhchap_digests": [ 01:38:51.689 "sha256", 01:38:51.689 "sha384", 01:38:51.689 "sha512" 01:38:51.689 ], 01:38:51.689 "dhchap_dhgroups": [ 01:38:51.689 "null", 01:38:51.689 "ffdhe2048", 01:38:51.689 "ffdhe3072", 01:38:51.689 "ffdhe4096", 01:38:51.689 "ffdhe6144", 01:38:51.689 "ffdhe8192" 01:38:51.689 ] 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "bdev_nvme_set_hotplug", 01:38:51.689 "params": { 01:38:51.689 "period_us": 100000, 01:38:51.689 "enable": false 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "bdev_malloc_create", 01:38:51.689 "params": { 01:38:51.689 "name": "malloc0", 01:38:51.689 "num_blocks": 8192, 01:38:51.689 "block_size": 4096, 01:38:51.689 "physical_block_size": 4096, 01:38:51.689 "uuid": "0534b66a-37dd-4966-991a-0f11b8d6d80d", 01:38:51.689 "optimal_io_boundary": 0, 01:38:51.689 "md_size": 0, 01:38:51.689 "dif_type": 0, 01:38:51.689 "dif_is_head_of_md": false, 01:38:51.689 "dif_pi_format": 0 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "bdev_wait_for_examine" 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "scsi", 01:38:51.689 "config": null 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "scheduler", 01:38:51.689 "config": [ 01:38:51.689 { 01:38:51.689 "method": "framework_set_scheduler", 01:38:51.689 "params": { 01:38:51.689 "name": "static" 01:38:51.689 } 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "vhost_scsi", 01:38:51.689 "config": [] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "vhost_blk", 01:38:51.689 "config": [] 
01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "ublk", 01:38:51.689 "config": [ 01:38:51.689 { 01:38:51.689 "method": "ublk_create_target", 01:38:51.689 "params": { 01:38:51.689 "cpumask": "1" 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "ublk_start_disk", 01:38:51.689 "params": { 01:38:51.689 "bdev_name": "malloc0", 01:38:51.689 "ublk_id": 0, 01:38:51.689 "num_queues": 1, 01:38:51.689 "queue_depth": 128 01:38:51.689 } 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "nbd", 01:38:51.689 "config": [] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "nvmf", 01:38:51.689 "config": [ 01:38:51.689 { 01:38:51.689 "method": "nvmf_set_config", 01:38:51.689 "params": { 01:38:51.689 "discovery_filter": "match_any", 01:38:51.689 "admin_cmd_passthru": { 01:38:51.689 "identify_ctrlr": false 01:38:51.689 }, 01:38:51.689 "dhchap_digests": [ 01:38:51.689 "sha256", 01:38:51.689 "sha384", 01:38:51.689 "sha512" 01:38:51.689 ], 01:38:51.689 "dhchap_dhgroups": [ 01:38:51.689 "null", 01:38:51.689 "ffdhe2048", 01:38:51.689 "ffdhe3072", 01:38:51.689 "ffdhe4096", 01:38:51.689 "ffdhe6144", 01:38:51.689 "ffdhe8192" 01:38:51.689 ] 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "nvmf_set_max_subsystems", 01:38:51.689 "params": { 01:38:51.689 "max_subsystems": 1024 01:38:51.689 } 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "method": "nvmf_set_crdt", 01:38:51.689 "params": { 01:38:51.689 "crdt1": 0, 01:38:51.689 "crdt2": 0, 01:38:51.689 "crdt3": 0 01:38:51.689 } 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }, 01:38:51.689 { 01:38:51.689 "subsystem": "iscsi", 01:38:51.689 "config": [ 01:38:51.689 { 01:38:51.689 "method": "iscsi_set_options", 01:38:51.689 "params": { 01:38:51.689 "node_base": "iqn.2016-06.io.spdk", 01:38:51.689 "max_sessions": 128, 01:38:51.689 "max_connections_per_session": 2, 01:38:51.689 "max_queue_depth": 64, 01:38:51.689 "default_time2wait": 2, 01:38:51.689 "default_time2retain": 20, 01:38:51.689 "first_burst_length": 8192, 01:38:51.689 "immediate_data": true, 01:38:51.689 "allow_duplicated_isid": false, 01:38:51.689 "error_recovery_level": 0, 01:38:51.689 "nop_timeout": 60, 01:38:51.689 "nop_in_interval": 30, 01:38:51.689 "disable_chap": false, 01:38:51.689 "require_chap": false, 01:38:51.689 "mutual_chap": false, 01:38:51.689 "chap_group": 0, 01:38:51.689 "max_large_datain_per_connection": 64, 01:38:51.689 "max_r2t_per_connection": 4, 01:38:51.689 "pdu_pool_size": 36864, 01:38:51.689 "immediate_data_pool_size": 16384, 01:38:51.689 "data_out_pool_size": 2048 01:38:51.689 } 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 } 01:38:51.689 ] 01:38:51.689 }' 01:38:51.689 05:33:42 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:38:51.689 05:33:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:38:51.689 05:33:42 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:38:51.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:38:51.690 05:33:42 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:38:51.690 05:33:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:38:51.690 [2024-12-09 05:33:43.066869] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
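This second target (pid 75362) is the restore half of the round trip: the JSON blob captured by save_config on the first target is fed back in through -c /dev/fd/63, as the spdk_tgt command line and the echoed config above show. A sketch of the likely plumbing, assuming bash process substitution is what provides the /dev/fd/63 path (an assumption; ublk.sh's exact wiring is not shown in this log):

    # Capture the live config from target A, then boot target B from it.
    config=$(scripts/rpc.py save_config)
    build/bin/spdk_tgt -L ublk -c <(echo "$config")   # -c reads /dev/fd/63

If the restore worked, the new target recreates ublk0 from the config alone, which is what the ublk_get_disks and /dev/ublkb0 checks just below verify.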
01:38:51.690 [2024-12-09 05:33:43.067049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75362 ] 01:38:51.690 [2024-12-09 05:33:43.238742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:38:51.948 [2024-12-09 05:33:43.342708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:52.884 [2024-12-09 05:33:44.381711] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:38:52.884 [2024-12-09 05:33:44.383145] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:38:52.884 [2024-12-09 05:33:44.388866] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 01:38:52.884 [2024-12-09 05:33:44.388972] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 01:38:52.884 [2024-12-09 05:33:44.389014] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:38:52.884 [2024-12-09 05:33:44.389047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:38:52.884 [2024-12-09 05:33:44.400779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:38:52.884 [2024-12-09 05:33:44.400814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:38:52.884 [2024-12-09 05:33:44.411780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:38:52.884 [2024-12-09 05:33:44.411937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:38:52.884 [2024-12-09 05:33:44.432887] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 01:38:52.884 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75362 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75362 ']' 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75362 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75362 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:38:53.143 killing process with pid 75362 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75362' 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75362 01:38:53.143 05:33:44 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75362 01:38:54.524 [2024-12-09 05:33:46.027655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:38:54.524 [2024-12-09 05:33:46.071791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:38:54.524 [2024-12-09 05:33:46.072032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:38:54.524 [2024-12-09 05:33:46.079713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:38:54.524 [2024-12-09 05:33:46.079794] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:38:54.524 [2024-12-09 05:33:46.079816] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:38:54.524 [2024-12-09 05:33:46.079868] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:38:54.524 [2024-12-09 05:33:46.080091] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:38:57.056 05:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 01:38:57.056 01:38:57.056 real 0m10.131s 01:38:57.056 user 0m7.504s 01:38:57.056 sys 0m3.620s 01:38:57.056 05:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:38:57.056 05:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 01:38:57.056 ************************************ 01:38:57.056 END TEST test_save_ublk_config 01:38:57.056 ************************************ 01:38:57.056 05:33:48 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75453 01:38:57.056 05:33:48 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:38:57.056 05:33:48 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 01:38:57.056 05:33:48 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75453 01:38:57.056 05:33:48 ublk -- common/autotest_common.sh@835 -- # '[' -z 75453 ']' 01:38:57.056 05:33:48 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:38:57.056 05:33:48 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 01:38:57.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:38:57.056 05:33:48 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:38:57.056 05:33:48 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 01:38:57.056 05:33:48 ublk -- common/autotest_common.sh@10 -- # set +x 01:38:57.056 [2024-12-09 05:33:48.323284] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
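Unlike the single-core targets above, this spdk_tgt is started with -m 0x3. The -m argument is a CPU core bitmask: 0x3 is binary 11, selecting cores 0 and 1, which is why the trace below reports two available cores and starts two reactors:

    # Two-core target for the create tests (mask 0b11 = cores 0 and 1)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk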
01:38:57.056 [2024-12-09 05:33:48.323443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75453 ] 01:38:57.056 [2024-12-09 05:33:48.491577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:38:57.056 [2024-12-09 05:33:48.616454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:38:57.056 [2024-12-09 05:33:48.616463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:38:57.992 05:33:49 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:38:57.992 05:33:49 ublk -- common/autotest_common.sh@868 -- # return 0 01:38:57.992 05:33:49 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 01:38:57.992 05:33:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:38:57.992 05:33:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 01:38:57.992 05:33:49 ublk -- common/autotest_common.sh@10 -- # set +x 01:38:57.992 ************************************ 01:38:57.992 START TEST test_create_ublk 01:38:57.992 ************************************ 01:38:57.992 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 01:38:57.992 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 01:38:57.992 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:57.992 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:38:57.992 [2024-12-09 05:33:49.489768] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:38:57.992 [2024-12-09 05:33:49.493072] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:38:57.992 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:57.992 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 01:38:57.992 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 01:38:57.992 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:57.992 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:38:58.250 [2024-12-09 05:33:49.768096] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 01:38:58.250 [2024-12-09 05:33:49.768778] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 01:38:58.250 [2024-12-09 05:33:49.768813] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:38:58.250 [2024-12-09 05:33:49.768831] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:38:58.250 [2024-12-09 05:33:49.777292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:38:58.250 [2024-12-09 05:33:49.777326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:38:58.250 
[2024-12-09 05:33:49.783809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:38:58.250 [2024-12-09 05:33:49.784655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:38:58.250 [2024-12-09 05:33:49.799853] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:38:58.250 05:33:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 01:38:58.250 { 01:38:58.250 "ublk_device": "/dev/ublkb0", 01:38:58.250 "id": 0, 01:38:58.250 "queue_depth": 512, 01:38:58.250 "num_queues": 4, 01:38:58.250 "bdev_name": "Malloc0" 01:38:58.250 } 01:38:58.250 ]' 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 01:38:58.250 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 01:38:58.509 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 01:38:58.509 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 01:38:58.509 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 01:38:58.509 05:33:49 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 01:38:58.509 05:33:50 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 01:38:58.509 05:33:50 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 01:38:58.509 05:33:50 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 01:38:58.509 05:33:50 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
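The xtrace above shows run_fio_test expanding its positional arguments into a full fio command line; passing a pattern (0xcc) is what switches on the --do_verify/--verify=pattern flags. The call as reconstructed from the trace:

    # run_fio_test <file> <offset> <size> <rw> <pattern> <extra_params>
    run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'

Because the resulting job is time-based and write-only, the warning fio prints next (verification read phase will never start) is expected: the write phase consumes the whole 10-second budget.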
01:38:58.509 05:33:50 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
01:38:58.766 fio: verification read phase will never start because write phase uses all of runtime
01:38:58.766 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
01:38:58.766 fio-3.35
01:38:58.766 Starting 1 process
01:39:08.781
01:39:08.781 fio_test: (groupid=0, jobs=1): err= 0: pid=75503: Mon Dec 9 05:34:00 2024
01:39:08.781 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(467MiB/10001msec); 0 zone resets
01:39:08.781 clat (usec): min=47, max=4053, avg=82.33, stdev=129.84
01:39:08.781 lat (usec): min=47, max=4053, avg=83.04, stdev=129.85
01:39:08.781 clat percentiles (usec):
01:39:08.781 | 1.00th=[ 56], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 68],
01:39:08.781 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73],
01:39:08.781 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 104],
01:39:08.781 | 99.00th=[ 126], 99.50th=[ 135], 99.90th=[ 2671], 99.95th=[ 3130],
01:39:08.781 | 99.99th=[ 3752]
01:39:08.781 bw ( KiB/s): min=47240, max=49352, per=100.00%, avg=47904.42, stdev=539.28, samples=19
01:39:08.781 iops : min=11810, max=12338, avg=11976.21, stdev=134.84, samples=19
01:39:08.781 lat (usec) : 50=0.04%, 100=92.79%, 250=6.83%, 500=0.02%, 750=0.01%
01:39:08.781 lat (usec) : 1000=0.02%
01:39:08.781 lat (msec) : 2=0.11%, 4=0.18%, 10=0.01%
01:39:08.781 cpu : usr=3.19%, sys=8.22%, ctx=119576, majf=0, minf=794
01:39:08.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
01:39:08.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
01:39:08.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
01:39:08.781 issued rwts: total=0,119571,0,0 short=0,0,0,0 dropped=0,0,0,0
01:39:08.781 latency : target=0, window=0, percentile=100.00%, depth=1
01:39:08.781
01:39:08.781 Run status group 0 (all jobs):
01:39:08.781 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=467MiB (490MB), run=10001-10001msec
01:39:08.781
01:39:08.781 Disk stats (read/write):
01:39:08.781 ublkb0: ios=0/118361, merge=0/0, ticks=0/8850, in_queue=8851, util=99.10%
01:39:08.781 05:34:00 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:08.781 [2024-12-09 05:34:00.332307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:39:08.781 [2024-12-09 05:34:00.379816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:39:08.781 [2024-12-09 05:34:00.380750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:39:08.781 [2024-12-09 05:34:00.387741] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:39:08.781 [2024-12-09 05:34:00.388060] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:39:08.781 [2024-12-09 05:34:00.388084] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:08.781 05:34:00 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:39:08.781 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:39:08.782 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:39:08.782 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 01:39:08.782 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.039 [2024-12-09 05:34:00.403918] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 01:39:09.039 request: 01:39:09.039 { 01:39:09.039 "ublk_id": 0, 01:39:09.039 "method": "ublk_stop_disk", 01:39:09.039 "req_id": 1 01:39:09.039 } 01:39:09.039 Got JSON-RPC error response 01:39:09.039 response: 01:39:09.039 { 01:39:09.039 "code": -19, 01:39:09.039 "message": "No such device" 01:39:09.039 } 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:39:09.039 05:34:00 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.039 [2024-12-09 05:34:00.427891] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:39:09.039 [2024-12-09 05:34:00.435748] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:39:09.039 [2024-12-09 05:34:00.435813] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:09.039 05:34:00 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.039 05:34:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:09.651 05:34:01 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 01:39:09.651 05:34:01 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 01:39:09.651 05:34:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 01:39:09.651 01:39:09.651 real 0m11.725s 01:39:09.651 user 0m0.756s 01:39:09.651 sys 0m0.944s 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 01:39:09.651 05:34:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.651 ************************************ 01:39:09.651 END TEST test_create_ublk 01:39:09.651 ************************************ 01:39:09.651 05:34:01 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 01:39:09.651 05:34:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:39:09.651 05:34:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 01:39:09.651 05:34:01 ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.651 ************************************ 01:39:09.651 START TEST test_create_multi_ublk 01:39:09.651 ************************************ 01:39:09.651 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 01:39:09.651 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 01:39:09.651 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.651 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:09.910 [2024-12-09 05:34:01.279763] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:39:09.910 [2024-12-09 05:34:01.282710] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:09.910 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.169 [2024-12-09 05:34:01.591927] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 01:39:10.169 [2024-12-09 05:34:01.592561] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 01:39:10.169 [2024-12-09 05:34:01.592582] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 01:39:10.169 [2024-12-09 05:34:01.592599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 01:39:10.169 [2024-12-09 05:34:01.599791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 01:39:10.169 [2024-12-09 05:34:01.599827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 01:39:10.169 [2024-12-09 05:34:01.607807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:39:10.169 [2024-12-09 05:34:01.608694] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 01:39:10.169 [2024-12-09 05:34:01.639769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.169 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.426 [2024-12-09 05:34:01.917936] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 01:39:10.426 [2024-12-09 05:34:01.918559] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 01:39:10.426 [2024-12-09 05:34:01.918588] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 01:39:10.426 [2024-12-09 05:34:01.918598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 01:39:10.426 [2024-12-09 05:34:01.925726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 01:39:10.426 [2024-12-09 05:34:01.925754] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 01:39:10.426 [2024-12-09 05:34:01.932822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:39:10.426 [2024-12-09 05:34:01.933757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 01:39:10.426 [2024-12-09 05:34:01.942778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:10.426 
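The traces above repeat one identical pattern per device; a condensed sketch of the loop being exercised, with rpc.py standing in for the harness's rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 0 3); do               # MAX_DEV_ID is 3 in this run
    # 128 MiB malloc bdev with a 4096-byte block size
    "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096
    # expose it as /dev/ublkb$i: 4 queues, queue depth 512
    "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
done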
05:34:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.426 05:34:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.684 [2024-12-09 05:34:02.219938] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 01:39:10.684 [2024-12-09 05:34:02.220487] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 01:39:10.684 [2024-12-09 05:34:02.220521] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 01:39:10.684 [2024-12-09 05:34:02.220545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 01:39:10.684 [2024-12-09 05:34:02.228337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 01:39:10.684 [2024-12-09 05:34:02.228391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 01:39:10.684 [2024-12-09 05:34:02.238711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:39:10.684 [2024-12-09 05:34:02.239647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 01:39:10.684 [2024-12-09 05:34:02.250842] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.684 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.941 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.941 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 01:39:10.941 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 01:39:10.941 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:10.941 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:10.942 [2024-12-09 05:34:02.523937] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 01:39:10.942 [2024-12-09 05:34:02.524540] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 01:39:10.942 [2024-12-09 05:34:02.524582] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 01:39:10.942 [2024-12-09 05:34:02.524604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 01:39:10.942 
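Each ublk_start_disk walks the same kernel control sequence seen in the traces: ADD_DEV, then SET_PARAMS, then START_DEV. Once START_DEV completes, the block device is visible to the host; a quick check, assuming util-linux is available (not part of the harness):

lsblk /dev/ublkb2                  # should list the new ublk device
blockdev --getsize64 /dev/ublkb2   # expect 134217728 (128 MiB backing bdev)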
[2024-12-09 05:34:02.531718] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 01:39:10.942 [2024-12-09 05:34:02.531743] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 01:39:10.942 [2024-12-09 05:34:02.539833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:39:10.942 [2024-12-09 05:34:02.540648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 01:39:10.942 [2024-12-09 05:34:02.548820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 01:39:10.942 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:10.942 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 01:39:11.199 { 01:39:11.199 "ublk_device": "/dev/ublkb0", 01:39:11.199 "id": 0, 01:39:11.199 "queue_depth": 512, 01:39:11.199 "num_queues": 4, 01:39:11.199 "bdev_name": "Malloc0" 01:39:11.199 }, 01:39:11.199 { 01:39:11.199 "ublk_device": "/dev/ublkb1", 01:39:11.199 "id": 1, 01:39:11.199 "queue_depth": 512, 01:39:11.199 "num_queues": 4, 01:39:11.199 "bdev_name": "Malloc1" 01:39:11.199 }, 01:39:11.199 { 01:39:11.199 "ublk_device": "/dev/ublkb2", 01:39:11.199 "id": 2, 01:39:11.199 "queue_depth": 512, 01:39:11.199 "num_queues": 4, 01:39:11.199 "bdev_name": "Malloc2" 01:39:11.199 }, 01:39:11.199 { 01:39:11.199 "ublk_device": "/dev/ublkb3", 01:39:11.199 "id": 3, 01:39:11.199 "queue_depth": 512, 01:39:11.199 "num_queues": 4, 01:39:11.199 "bdev_name": "Malloc3" 01:39:11.199 } 01:39:11.199 ]' 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:39:11.199 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 01:39:11.457 05:34:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 01:39:11.457 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:39:11.457 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 01:39:11.457 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:39:11.457 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 01:39:11.714 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 01:39:11.714 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:11.714 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:39:11.715 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 01:39:11.972 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:12.230 [2024-12-09 05:34:03.652065] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 01:39:12.230 [2024-12-09 05:34:03.689353] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 01:39:12.230 [2024-12-09 05:34:03.690634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 01:39:12.230 [2024-12-09 05:34:03.696009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 01:39:12.230 [2024-12-09 05:34:03.696389] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 01:39:12.230 [2024-12-09 05:34:03.696409] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:12.230 [2024-12-09 05:34:03.713794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 01:39:12.230 [2024-12-09 05:34:03.746287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 01:39:12.230 [2024-12-09 05:34:03.747529] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 01:39:12.230 [2024-12-09 05:34:03.753756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 01:39:12.230 [2024-12-09 05:34:03.754075] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 01:39:12.230 [2024-12-09 05:34:03.754101] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:12.230 [2024-12-09 05:34:03.764918] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 01:39:12.230 [2024-12-09 05:34:03.803349] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 01:39:12.230 [2024-12-09 05:34:03.804484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 01:39:12.230 [2024-12-09 05:34:03.811800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 01:39:12.230 [2024-12-09 05:34:03.812189] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 01:39:12.230 [2024-12-09 05:34:03.812215] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:12.230 05:34:03 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 01:39:12.230 [2024-12-09 05:34:03.827882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 01:39:12.489 [2024-12-09 05:34:03.868431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 01:39:12.489 [2024-12-09 05:34:03.869454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 01:39:12.489 [2024-12-09 05:34:03.875820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 01:39:12.489 [2024-12-09 05:34:03.876181] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 01:39:12.489 [2024-12-09 05:34:03.876206] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 01:39:12.489 05:34:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:12.489 05:34:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 01:39:12.748 [2024-12-09 05:34:04.163900] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:39:12.748 [2024-12-09 05:34:04.171773] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:39:12.748 [2024-12-09 05:34:04.171838] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 01:39:12.748 05:34:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 01:39:12.748 05:34:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:12.748 05:34:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 01:39:12.748 05:34:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:12.748 05:34:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:13.316 05:34:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:13.316 05:34:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:13.316 05:34:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 01:39:13.316 05:34:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:13.316 05:34:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:13.574 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:13.574 05:34:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:13.575 05:34:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 01:39:13.575 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:13.575 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:14.144 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:14.144 05:34:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 01:39:14.144 05:34:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 01:39:14.144 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:14.144 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:14.401 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:14.401 05:34:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 01:39:14.401 05:34:05 
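Teardown mirrors creation: each disk is stopped (STOP_DEV then DEL_DEV), the ublk target is destroyed, and the malloc bdevs are deleted, as the traces above show. A sketch of the equivalent standalone sequence:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 0 3); do
    "$rpc" ublk_stop_disk "$i"
done
"$rpc" -t 120 ublk_destroy_target     # -t 120: allow a long shutdown
for i in $(seq 0 3); do
    "$rpc" bdev_malloc_delete "Malloc$i"
done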
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 01:39:14.402 ************************************ 01:39:14.402 END TEST test_create_multi_ublk 01:39:14.402 ************************************ 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 01:39:14.402 01:39:14.402 real 0m4.705s 01:39:14.402 user 0m1.403s 01:39:14.402 sys 0m0.149s 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 01:39:14.402 05:34:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 01:39:14.402 05:34:06 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:39:14.402 05:34:06 ublk -- ublk/ublk.sh@147 -- # cleanup 01:39:14.402 05:34:06 ublk -- ublk/ublk.sh@130 -- # killprocess 75453 01:39:14.402 05:34:06 ublk -- common/autotest_common.sh@954 -- # '[' -z 75453 ']' 01:39:14.402 05:34:06 ublk -- common/autotest_common.sh@958 -- # kill -0 75453 01:39:14.402 05:34:06 ublk -- common/autotest_common.sh@959 -- # uname 01:39:14.402 05:34:06 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:39:14.402 05:34:06 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75453 01:39:14.659 killing process with pid 75453 01:39:14.659 05:34:06 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:39:14.659 05:34:06 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:39:14.659 05:34:06 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75453' 01:39:14.659 05:34:06 ublk -- common/autotest_common.sh@973 -- # kill 75453 01:39:14.659 05:34:06 ublk -- common/autotest_common.sh@978 -- # wait 75453 01:39:15.592 [2024-12-09 05:34:07.035961] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 01:39:15.592 [2024-12-09 05:34:07.036020] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:39:16.980 01:39:16.980 real 0m30.436s 01:39:16.980 user 0m43.345s 01:39:16.980 sys 0m10.935s 01:39:16.980 05:34:08 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 01:39:16.980 05:34:08 ublk -- common/autotest_common.sh@10 -- # set +x 01:39:16.980 ************************************ 01:39:16.980 END TEST ublk 01:39:16.980 ************************************ 01:39:16.980 05:34:08 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 01:39:16.980 
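Between tests the harness runs the check_leftover_devices call traced above, asserting that no bdevs or lvstores survived the teardown. A sketch of that check, assuming jq is installed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
leftover_bdevs=$("$rpc" bdev_get_bdevs)
[ "$(jq length <<< "$leftover_bdevs")" -eq 0 ] || exit 1   # expect []
leftover_lvs=$("$rpc" bdev_lvol_get_lvstores)
[ "$(jq length <<< "$leftover_lvs")" -eq 0 ] || exit 1     # expect []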
05:34:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:39:16.980 05:34:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:39:16.980 05:34:08 -- common/autotest_common.sh@10 -- # set +x 01:39:16.980 ************************************ 01:39:16.980 START TEST ublk_recovery 01:39:16.980 ************************************ 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 01:39:16.980 * Looking for test storage... 01:39:16.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@345 -- # : 1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@353 -- # local d=1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@355 -- # echo 1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@353 -- # local d=2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@355 -- # echo 2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:39:16.980 05:34:08 ublk_recovery -- scripts/common.sh@368 -- # return 0 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:39:16.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:39:16.980 --rc genhtml_branch_coverage=1 01:39:16.980 --rc genhtml_function_coverage=1 01:39:16.980 --rc genhtml_legend=1 01:39:16.980 --rc geninfo_all_blocks=1 01:39:16.980 --rc geninfo_unexecuted_blocks=1 01:39:16.980 01:39:16.980 ' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:39:16.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:39:16.980 --rc genhtml_branch_coverage=1 01:39:16.980 --rc genhtml_function_coverage=1 01:39:16.980 --rc genhtml_legend=1 01:39:16.980 --rc geninfo_all_blocks=1 01:39:16.980 --rc geninfo_unexecuted_blocks=1 01:39:16.980 01:39:16.980 ' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:39:16.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:39:16.980 --rc genhtml_branch_coverage=1 01:39:16.980 --rc genhtml_function_coverage=1 01:39:16.980 --rc genhtml_legend=1 01:39:16.980 --rc geninfo_all_blocks=1 01:39:16.980 --rc geninfo_unexecuted_blocks=1 01:39:16.980 01:39:16.980 ' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:39:16.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:39:16.980 --rc genhtml_branch_coverage=1 01:39:16.980 --rc genhtml_function_coverage=1 01:39:16.980 --rc genhtml_legend=1 01:39:16.980 --rc geninfo_all_blocks=1 01:39:16.980 --rc geninfo_unexecuted_blocks=1 01:39:16.980 01:39:16.980 ' 01:39:16.980 05:34:08 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 01:39:16.980 05:34:08 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 01:39:16.980 05:34:08 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 01:39:16.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:39:16.980 05:34:08 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75876 01:39:16.980 05:34:08 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:39:16.980 05:34:08 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 01:39:16.980 05:34:08 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75876 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75876 ']' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:39:16.980 05:34:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:17.240 [2024-12-09 05:34:08.636565] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:39:17.240 [2024-12-09 05:34:08.636761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75876 ] 01:39:17.240 [2024-12-09 05:34:08.824211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:39:17.499 [2024-12-09 05:34:08.943464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:39:17.499 [2024-12-09 05:34:08.943469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 01:39:18.525 05:34:09 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:18.525 [2024-12-09 05:34:09.814706] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:39:18.525 [2024-12-09 05:34:09.817687] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:18.525 05:34:09 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:18.525 malloc0 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:18.525 05:34:09 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:18.525 05:34:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:18.525 [2024-12-09 05:34:09.972082] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 01:39:18.525 [2024-12-09 05:34:09.972250] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 01:39:18.525 [2024-12-09 05:34:09.972272] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 01:39:18.525 [2024-12-09 05:34:09.972294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 01:39:18.525 [2024-12-09 05:34:09.979819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 01:39:18.525 [2024-12-09 05:34:09.979845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 01:39:18.525 [2024-12-09 05:34:09.987796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 01:39:18.525 [2024-12-09 05:34:09.988015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 01:39:18.525 [2024-12-09 05:34:10.017721] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 01:39:18.525 1 01:39:18.525 05:34:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:18.525 05:34:10 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 01:39:19.457 05:34:11 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75917 01:39:19.457 05:34:11 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 01:39:19.457 05:34:11 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 01:39:19.715 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:39:19.715 fio-3.35 01:39:19.715 Starting 1 process 01:39:24.992 05:34:16 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75876 01:39:24.992 05:34:16 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 01:39:30.259 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75876 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 01:39:30.259 05:34:21 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76023 01:39:30.259 05:34:21 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:39:30.259 05:34:21 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 01:39:30.259 05:34:21 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76023 01:39:30.259 05:34:21 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76023 ']' 01:39:30.259 05:34:21 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:39:30.259 05:34:21 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:39:30.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:39:30.259 05:34:21 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:39:30.259 05:34:21 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:39:30.259 05:34:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:30.259 [2024-12-09 05:34:21.216590] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
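The recovery scenario staged here, which the traces above and below walk through: drive random I/O against the ublk device, hard-kill the SPDK target mid-run, restart it, then recover the disk so the still-running fio can finish. A condensed sketch, with $SPDK_BIN_DIR and $spdk_pid as in the log:

taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
sleep 5
kill -9 "$spdk_pid"                          # simulate a target crash under load
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # restart the target
# once the new target is listening, re-create the state and recover:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" ublk_create_target
"$rpc" bdev_malloc_create -b malloc0 64 4096
"$rpc" ublk_recover_disk malloc0 1           # re-attach bdev to /dev/ublkb1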
01:39:30.259 [2024-12-09 05:34:21.216779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76023 ] 01:39:30.259 [2024-12-09 05:34:21.400298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:39:30.259 [2024-12-09 05:34:21.554455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:39:30.259 [2024-12-09 05:34:21.554504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:39:30.822 05:34:22 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:39:30.822 05:34:22 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 01:39:30.822 05:34:22 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 01:39:30.822 05:34:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:30.822 05:34:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:30.822 [2024-12-09 05:34:22.437780] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 01:39:31.081 [2024-12-09 05:34:22.441113] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:31.081 05:34:22 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:31.081 malloc0 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:31.081 05:34:22 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:39:31.081 [2024-12-09 05:34:22.595958] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 01:39:31.081 [2024-12-09 05:34:22.596053] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 01:39:31.081 [2024-12-09 05:34:22.596092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:39:31.081 [2024-12-09 05:34:22.603852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:39:31.081 [2024-12-09 05:34:22.603889] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 01:39:31.081 1 01:39:31.081 05:34:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:39:31.081 05:34:22 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75917 01:39:32.018 [2024-12-09 05:34:23.607732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:39:32.018 [2024-12-09 05:34:23.615773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:39:32.018 [2024-12-09 05:34:23.615801] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 01:39:33.388 [2024-12-09 05:34:24.619784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 01:39:33.388 [2024-12-09 05:34:24.627824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 01:39:33.388 [2024-12-09 05:34:24.627874] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
01:39:34.319 [2024-12-09 05:34:25.627906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
01:39:34.319 [2024-12-09 05:34:25.631792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
01:39:34.319 [2024-12-09 05:34:25.631819] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
01:39:34.319 [2024-12-09 05:34:25.631852] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
01:39:34.319 [2024-12-09 05:34:25.631984] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
01:39:56.245 [2024-12-09 05:34:46.101765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
01:39:56.245 [2024-12-09 05:34:46.109305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
01:39:56.245 [2024-12-09 05:34:46.117013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
01:39:56.245 [2024-12-09 05:34:46.117058] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
01:40:22.788
01:40:22.788 fio_test: (groupid=0, jobs=1): err= 0: pid=75920: Mon Dec 9 05:35:11 2024
01:40:22.788 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(2371MiB/60005msec)
01:40:22.788 slat (usec): min=2, max=230, avg= 6.14, stdev= 3.13
01:40:22.788 clat (usec): min=873, max=30092k, avg=6318.88, stdev=311364.64
01:40:22.788 lat (usec): min=890, max=30092k, avg=6325.02, stdev=311364.65
01:40:22.788 clat percentiles (msec):
01:40:22.788 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
01:40:22.788 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4],
01:40:22.788 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5],
01:40:22.788 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10],
01:40:22.788 | 99.99th=[17113]
01:40:22.788 bw ( KiB/s): min=14584, max=87928, per=100.00%, avg=79706.13, stdev=11577.47, samples=60
01:40:22.788 iops : min= 3646, max=21982, avg=19926.53, stdev=2894.37, samples=60
01:40:22.788 write: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(2368MiB/60005msec); 0 zone resets
01:40:22.788 slat (usec): min=2, max=218, avg= 6.37, stdev= 3.36
01:40:22.788 clat (usec): min=861, max=30092k, avg=6330.49, stdev=306726.78
01:40:22.788 lat (usec): min=867, max=30092k, avg=6336.86, stdev=306726.79
01:40:22.788 clat percentiles (msec):
01:40:22.788 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
01:40:22.788 | 30.00th=[ 3], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4],
01:40:22.788 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4],
01:40:22.788 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10],
01:40:22.788 | 99.99th=[17113]
01:40:22.788 bw ( KiB/s): min=15384, max=87576, per=100.00%, avg=79615.33, stdev=11460.23, samples=60
01:40:22.788 iops : min= 3846, max=21894, avg=19903.83, stdev=2865.06, samples=60
01:40:22.788 lat (usec) : 1000=0.01%
01:40:22.788 lat (msec) : 2=0.07%, 4=94.75%, 10=5.13%, 20=0.03%, >=2000=0.01%
01:40:22.788 cpu : usr=5.29%, sys=12.01%, ctx=38403, majf=0, minf=13
01:40:22.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
01:40:22.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
01:40:22.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
01:40:22.788 issued rwts: total=606884,606139,0,0 short=0,0,0,0 dropped=0,0,0,0
01:40:22.788 latency : target=0, window=0, percentile=100.00%, depth=128
01:40:22.788
01:40:22.788 Run status group 0 (all jobs):
01:40:22.788 READ: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=2371MiB (2486MB), run=60005-60005msec
01:40:22.788 WRITE: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=2368MiB (2483MB), run=60005-60005msec
01:40:22.788
01:40:22.788 Disk stats (read/write):
01:40:22.788 ublkb1: ios=604568/603860, merge=0/0, ticks=3774317/3710147, in_queue=7484464, util=99.94%
01:40:22.788 05:35:11 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
01:40:22.788 [2024-12-09 05:35:11.319921] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
01:40:22.788 [2024-12-09 05:35:11.365910] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
01:40:22.788 [2024-12-09 05:35:11.366151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
01:40:22.788 [2024-12-09 05:35:11.374729] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
01:40:22.788 [2024-12-09 05:35:11.374865] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
01:40:22.788 [2024-12-09 05:35:11.374885] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:40:22.788 05:35:11 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
01:40:22.788 [2024-12-09 05:35:11.388831] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
01:40:22.788 [2024-12-09 05:35:11.396706] ublk.c: 766:_ublk_fini_done: *DEBUG*:
01:40:22.788 [2024-12-09 05:35:11.396772] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:40:22.788 05:35:11 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
01:40:22.788 05:35:11 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
01:40:22.788 05:35:11 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76023
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76023 ']'
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76023
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@959 -- # uname
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76023
killing process with pid 76023
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76023'
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76023
01:40:22.788 05:35:11 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76023
01:40:22.788 [2024-12-09 05:35:12.948127] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
01:40:22.788
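A quick sanity check on the recovery run's numbers: 2371 MiB read over 60.005 seconds is indeed the reported 39.5 MiB/s, so the 60-second job survived the mid-run target kill at effectively full device utilization (util=99.94%):

echo 'scale=1; 2371 / 60.005' | bc   # -> 39.5 (MiB/s, matches fio's READ bw)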
[2024-12-09 05:35:12.948201] ublk.c: 766:_ublk_fini_done: *DEBUG*: 01:40:22.788 ************************************ 01:40:22.788 END TEST ublk_recovery 01:40:22.788 ************************************ 01:40:22.788 01:40:22.788 real 1m6.019s 01:40:22.788 user 1m50.320s 01:40:22.788 sys 0m21.304s 01:40:22.788 05:35:14 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:40:22.788 05:35:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 01:40:22.788 05:35:14 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 01:40:22.788 05:35:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:40:22.788 05:35:14 -- spdk/autotest.sh@260 -- # timing_exit lib 01:40:22.788 05:35:14 -- common/autotest_common.sh@732 -- # xtrace_disable 01:40:22.788 05:35:14 -- common/autotest_common.sh@10 -- # set +x 01:40:23.047 05:35:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 01:40:23.047 05:35:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 01:40:23.047 05:35:14 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 01:40:23.047 05:35:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:40:23.047 05:35:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:40:23.047 05:35:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:40:23.048 05:35:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:40:23.048 05:35:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:40:23.048 05:35:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:40:23.048 05:35:14 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 01:40:23.048 05:35:14 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 01:40:23.048 05:35:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:40:23.048 05:35:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:40:23.048 05:35:14 -- common/autotest_common.sh@10 -- # set +x 01:40:23.048 ************************************ 01:40:23.048 START TEST ftl 01:40:23.048 ************************************ 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 01:40:23.048 * Looking for test storage... 01:40:23.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1693 -- # lcov --version 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 01:40:23.048 05:35:14 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 01:40:23.048 05:35:14 ftl -- scripts/common.sh@336 -- # IFS=.-: 01:40:23.048 05:35:14 ftl -- scripts/common.sh@336 -- # read -ra ver1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@337 -- # IFS=.-: 01:40:23.048 05:35:14 ftl -- scripts/common.sh@337 -- # read -ra ver2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@338 -- # local 'op=<' 01:40:23.048 05:35:14 ftl -- scripts/common.sh@340 -- # ver1_l=2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@341 -- # ver2_l=1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:40:23.048 05:35:14 ftl -- scripts/common.sh@344 -- # case "$op" in 01:40:23.048 05:35:14 ftl -- scripts/common.sh@345 -- # : 1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 01:40:23.048 05:35:14 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:40:23.048 05:35:14 ftl -- scripts/common.sh@365 -- # decimal 1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@353 -- # local d=1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:40:23.048 05:35:14 ftl -- scripts/common.sh@355 -- # echo 1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@365 -- # ver1[v]=1 01:40:23.048 05:35:14 ftl -- scripts/common.sh@366 -- # decimal 2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@353 -- # local d=2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:40:23.048 05:35:14 ftl -- scripts/common.sh@355 -- # echo 2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@366 -- # ver2[v]=2 01:40:23.048 05:35:14 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:40:23.048 05:35:14 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:40:23.048 05:35:14 ftl -- scripts/common.sh@368 -- # return 0 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:40:23.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:23.048 --rc genhtml_branch_coverage=1 01:40:23.048 --rc genhtml_function_coverage=1 01:40:23.048 --rc genhtml_legend=1 01:40:23.048 --rc geninfo_all_blocks=1 01:40:23.048 --rc geninfo_unexecuted_blocks=1 01:40:23.048 01:40:23.048 ' 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:40:23.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:23.048 --rc genhtml_branch_coverage=1 01:40:23.048 --rc genhtml_function_coverage=1 01:40:23.048 --rc genhtml_legend=1 01:40:23.048 --rc geninfo_all_blocks=1 01:40:23.048 --rc geninfo_unexecuted_blocks=1 01:40:23.048 01:40:23.048 ' 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:40:23.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:23.048 --rc genhtml_branch_coverage=1 01:40:23.048 --rc genhtml_function_coverage=1 01:40:23.048 --rc genhtml_legend=1 01:40:23.048 --rc geninfo_all_blocks=1 01:40:23.048 --rc geninfo_unexecuted_blocks=1 01:40:23.048 01:40:23.048 ' 01:40:23.048 05:35:14 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:40:23.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:23.048 --rc genhtml_branch_coverage=1 01:40:23.048 --rc genhtml_function_coverage=1 01:40:23.048 --rc genhtml_legend=1 01:40:23.048 --rc geninfo_all_blocks=1 01:40:23.048 --rc geninfo_unexecuted_blocks=1 01:40:23.048 01:40:23.048 ' 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:40:23.048 05:35:14 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 01:40:23.048 05:35:14 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:40:23.048 05:35:14 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:40:23.048 05:35:14 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
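The scripts/common.sh traces above implement a field-wise version comparison (versions split on '.', '-', ':' and compared component by component); here it decides that lcov 1.15 predates 2.0 and selects the matching coverage flags. A condensed sketch of the equivalent logic, not the harness's exact code:

ver_lt() {
    # Return 0 if version $1 sorts strictly before version $2.
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        ((${a[v]:-0} < ${b[v]:-0})) && return 0   # missing fields count as 0
        ((${a[v]:-0} > ${b[v]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov predates 2.0"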
01:40:23.048 05:35:14 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:40:23.048 05:35:14 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:40:23.048 05:35:14 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:40:23.048 05:35:14 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:40:23.048 05:35:14 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:23.048 05:35:14 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:23.048 05:35:14 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:40:23.048 05:35:14 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:40:23.048 05:35:14 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:40:23.048 05:35:14 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:40:23.048 05:35:14 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:40:23.048 05:35:14 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:40:23.048 05:35:14 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:23.048 05:35:14 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:23.048 05:35:14 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:40:23.048 05:35:14 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:40:23.048 05:35:14 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:40:23.048 05:35:14 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:40:23.048 05:35:14 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:40:23.048 05:35:14 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:40:23.048 05:35:14 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:40:23.048 05:35:14 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 01:40:23.048 05:35:14 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:40:23.048 05:35:14 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 01:40:23.048 05:35:14 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:40:23.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:40:23.618 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:40:23.618 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:40:23.618 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 01:40:23.618 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 01:40:23.618 05:35:15 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76808 01:40:23.618 05:35:15 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 01:40:23.618 05:35:15 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76808 01:40:23.618 05:35:15 ftl -- common/autotest_common.sh@835 -- # '[' -z 76808 ']' 01:40:23.618 05:35:15 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:40:23.618 05:35:15 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 01:40:23.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:40:23.618 05:35:15 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:40:23.618 05:35:15 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 01:40:23.618 05:35:15 ftl -- common/autotest_common.sh@10 -- # set +x 01:40:23.877 [2024-12-09 05:35:15.326061] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:40:23.877 [2024-12-09 05:35:15.326273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76808 ] 01:40:24.135 [2024-12-09 05:35:15.518708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:40:24.135 [2024-12-09 05:35:15.677176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:40:24.713 05:35:16 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:40:24.713 05:35:16 ftl -- common/autotest_common.sh@868 -- # return 0 01:40:24.713 05:35:16 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 01:40:24.972 05:35:16 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:40:26.346 05:35:17 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:40:26.346 05:35:17 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 01:40:26.604 05:35:18 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 01:40:26.604 05:35:18 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 01:40:26.604 05:35:18 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@50 -- # break 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 01:40:26.873 05:35:18 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 01:40:27.439 05:35:18 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 01:40:27.439 05:35:18 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 01:40:27.439 05:35:18 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 01:40:27.439 05:35:18 ftl -- ftl/ftl.sh@63 -- # break 01:40:27.439 05:35:18 ftl -- ftl/ftl.sh@66 -- # killprocess 76808 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@954 -- # '[' -z 76808 ']' 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@958 -- # kill -0 76808 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@959 -- # uname 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:40:27.439 05:35:18 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76808 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:40:27.439 killing process with pid 76808 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76808' 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@973 -- # kill 76808 01:40:27.439 05:35:18 ftl -- common/autotest_common.sh@978 -- # wait 76808 01:40:29.970 05:35:21 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 01:40:29.970 05:35:21 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 01:40:29.970 05:35:21 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:40:29.970 05:35:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:40:29.970 05:35:21 ftl -- common/autotest_common.sh@10 -- # set +x 01:40:29.970 ************************************ 01:40:29.970 START TEST ftl_fio_basic 01:40:29.970 ************************************ 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 01:40:29.970 * Looking for test storage... 01:40:29.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:40:29.970 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:40:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:29.971 --rc genhtml_branch_coverage=1 01:40:29.971 --rc genhtml_function_coverage=1 01:40:29.971 --rc genhtml_legend=1 01:40:29.971 --rc geninfo_all_blocks=1 01:40:29.971 --rc geninfo_unexecuted_blocks=1 01:40:29.971 01:40:29.971 ' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:40:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:29.971 --rc genhtml_branch_coverage=1 01:40:29.971 --rc genhtml_function_coverage=1 01:40:29.971 --rc genhtml_legend=1 01:40:29.971 --rc geninfo_all_blocks=1 01:40:29.971 --rc geninfo_unexecuted_blocks=1 01:40:29.971 01:40:29.971 ' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:40:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:29.971 --rc genhtml_branch_coverage=1 01:40:29.971 --rc genhtml_function_coverage=1 01:40:29.971 --rc genhtml_legend=1 01:40:29.971 --rc geninfo_all_blocks=1 01:40:29.971 --rc geninfo_unexecuted_blocks=1 01:40:29.971 01:40:29.971 ' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:40:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:40:29.971 --rc genhtml_branch_coverage=1 01:40:29.971 --rc genhtml_function_coverage=1 01:40:29.971 --rc genhtml_legend=1 01:40:29.971 --rc geninfo_all_blocks=1 01:40:29.971 --rc geninfo_unexecuted_blocks=1 01:40:29.971 01:40:29.971 ' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
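The ftl/common.sh prologue being traced here (now for the fio run) locates the source tree with the usual dirname/readlink dance before wiring everything through rpc.py. A condensed sketch, assuming the script resolves its own path via $0; the resolved values match the trace:

    testdir=$(readlink -f "$(dirname "$0")")   # -> /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # -> /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # every later setup step goes through this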
01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76957 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76957 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76957 ']' 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:40:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 01:40:29.971 05:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:40:29.971 [2024-12-09 05:35:21.390526] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
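The prologue above ends with fio.sh launching its own spdk_tgt on core mask 7 (three cores) and parking in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait pattern; the polling body is an assumption (the real helper in autotest_common.sh does more bookkeeping), but rpc_get_methods is a stock SPDK RPC and illustrates the idea:

    "$rootdir/build/bin/spdk_tgt" -m 7 &
    svcpid=$!                                      # 76957 in this run
    max_retries=100                                # as in the trace
    for (( i = 0; i < max_retries; i++ )); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done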
01:40:29.971 [2024-12-09 05:35:21.390725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76957 ] 01:40:29.971 [2024-12-09 05:35:21.582189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:40:30.230 [2024-12-09 05:35:21.745940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:40:30.230 [2024-12-09 05:35:21.746097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:40:30.230 [2024-12-09 05:35:21.746122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 01:40:31.167 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:40:31.426 05:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:40:31.685 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:40:31.685 { 01:40:31.685 "name": "nvme0n1", 01:40:31.685 "aliases": [ 01:40:31.685 "7444e446-431a-4789-91a6-d76a2c2d0824" 01:40:31.685 ], 01:40:31.685 "product_name": "NVMe disk", 01:40:31.685 "block_size": 4096, 01:40:31.685 "num_blocks": 1310720, 01:40:31.685 "uuid": "7444e446-431a-4789-91a6-d76a2c2d0824", 01:40:31.685 "numa_id": -1, 01:40:31.685 "assigned_rate_limits": { 01:40:31.685 "rw_ios_per_sec": 0, 01:40:31.685 "rw_mbytes_per_sec": 0, 01:40:31.685 "r_mbytes_per_sec": 0, 01:40:31.685 "w_mbytes_per_sec": 0 01:40:31.685 }, 01:40:31.685 "claimed": false, 01:40:31.685 "zoned": false, 01:40:31.685 "supported_io_types": { 01:40:31.685 "read": true, 01:40:31.685 "write": true, 01:40:31.685 "unmap": true, 01:40:31.685 "flush": true, 01:40:31.685 "reset": true, 01:40:31.685 "nvme_admin": true, 01:40:31.685 "nvme_io": true, 01:40:31.685 "nvme_io_md": false, 01:40:31.685 "write_zeroes": true, 01:40:31.685 "zcopy": false, 01:40:31.685 "get_zone_info": false, 01:40:31.686 "zone_management": false, 01:40:31.686 "zone_append": false, 01:40:31.686 "compare": true, 01:40:31.686 "compare_and_write": false, 01:40:31.686 "abort": true, 01:40:31.686 
"seek_hole": false, 01:40:31.686 "seek_data": false, 01:40:31.686 "copy": true, 01:40:31.686 "nvme_iov_md": false 01:40:31.686 }, 01:40:31.686 "driver_specific": { 01:40:31.686 "nvme": [ 01:40:31.686 { 01:40:31.686 "pci_address": "0000:00:11.0", 01:40:31.686 "trid": { 01:40:31.686 "trtype": "PCIe", 01:40:31.686 "traddr": "0000:00:11.0" 01:40:31.686 }, 01:40:31.686 "ctrlr_data": { 01:40:31.686 "cntlid": 0, 01:40:31.686 "vendor_id": "0x1b36", 01:40:31.686 "model_number": "QEMU NVMe Ctrl", 01:40:31.686 "serial_number": "12341", 01:40:31.686 "firmware_revision": "8.0.0", 01:40:31.686 "subnqn": "nqn.2019-08.org.qemu:12341", 01:40:31.686 "oacs": { 01:40:31.686 "security": 0, 01:40:31.686 "format": 1, 01:40:31.686 "firmware": 0, 01:40:31.686 "ns_manage": 1 01:40:31.686 }, 01:40:31.686 "multi_ctrlr": false, 01:40:31.686 "ana_reporting": false 01:40:31.686 }, 01:40:31.686 "vs": { 01:40:31.686 "nvme_version": "1.4" 01:40:31.686 }, 01:40:31.686 "ns_data": { 01:40:31.686 "id": 1, 01:40:31.686 "can_share": false 01:40:31.686 } 01:40:31.686 } 01:40:31.686 ], 01:40:31.686 "mp_policy": "active_passive" 01:40:31.686 } 01:40:31.686 } 01:40:31.686 ]' 01:40:31.686 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:40:31.945 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:40:32.205 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 01:40:32.205 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:40:32.463 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0 01:40:32.463 05:35:23 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=149539d6-5a29-4c9e-9e13-521fd940e0f2 
01:40:32.720 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:40:32.720 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:32.979 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:40:32.979 { 01:40:32.979 "name": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:32.979 "aliases": [ 01:40:32.979 "lvs/nvme0n1p0" 01:40:32.979 ], 01:40:32.979 "product_name": "Logical Volume", 01:40:32.979 "block_size": 4096, 01:40:32.979 "num_blocks": 26476544, 01:40:32.979 "uuid": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:32.979 "assigned_rate_limits": { 01:40:32.979 "rw_ios_per_sec": 0, 01:40:32.979 "rw_mbytes_per_sec": 0, 01:40:32.979 "r_mbytes_per_sec": 0, 01:40:32.979 "w_mbytes_per_sec": 0 01:40:32.979 }, 01:40:32.979 "claimed": false, 01:40:32.979 "zoned": false, 01:40:32.979 "supported_io_types": { 01:40:32.979 "read": true, 01:40:32.979 "write": true, 01:40:32.979 "unmap": true, 01:40:32.979 "flush": false, 01:40:32.979 "reset": true, 01:40:32.979 "nvme_admin": false, 01:40:32.979 "nvme_io": false, 01:40:32.979 "nvme_io_md": false, 01:40:32.979 "write_zeroes": true, 01:40:32.979 "zcopy": false, 01:40:32.979 "get_zone_info": false, 01:40:32.979 "zone_management": false, 01:40:32.979 "zone_append": false, 01:40:32.979 "compare": false, 01:40:32.979 "compare_and_write": false, 01:40:32.979 "abort": false, 01:40:32.979 "seek_hole": true, 01:40:32.979 "seek_data": true, 01:40:32.979 "copy": false, 01:40:32.979 "nvme_iov_md": false 01:40:32.979 }, 01:40:32.979 "driver_specific": { 01:40:32.979 "lvol": { 01:40:32.979 "lvol_store_uuid": "5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0", 01:40:32.979 "base_bdev": "nvme0n1", 01:40:32.979 "thin_provision": true, 01:40:32.979 "num_allocated_clusters": 0, 01:40:32.979 "snapshot": false, 01:40:32.979 "clone": false, 01:40:32.979 "esnap_clone": false 01:40:32.979 } 01:40:32.979 } 01:40:32.979 } 01:40:32.979 ]' 01:40:32.979 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:40:32.979 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:40:32.979 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:40:33.238 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 01:40:33.238 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:40:33.238 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 01:40:33.238 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 01:40:33.238 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 01:40:33.238 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:40:33.496 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:40:33.497 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 01:40:33.497 05:35:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:33.497 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:33.497 05:35:24 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:40:33.497 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:40:33.497 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:40:33.497 05:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:40:33.754 { 01:40:33.754 "name": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:33.754 "aliases": [ 01:40:33.754 "lvs/nvme0n1p0" 01:40:33.754 ], 01:40:33.754 "product_name": "Logical Volume", 01:40:33.754 "block_size": 4096, 01:40:33.754 "num_blocks": 26476544, 01:40:33.754 "uuid": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:33.754 "assigned_rate_limits": { 01:40:33.754 "rw_ios_per_sec": 0, 01:40:33.754 "rw_mbytes_per_sec": 0, 01:40:33.754 "r_mbytes_per_sec": 0, 01:40:33.754 "w_mbytes_per_sec": 0 01:40:33.754 }, 01:40:33.754 "claimed": false, 01:40:33.754 "zoned": false, 01:40:33.754 "supported_io_types": { 01:40:33.754 "read": true, 01:40:33.754 "write": true, 01:40:33.754 "unmap": true, 01:40:33.754 "flush": false, 01:40:33.754 "reset": true, 01:40:33.754 "nvme_admin": false, 01:40:33.754 "nvme_io": false, 01:40:33.754 "nvme_io_md": false, 01:40:33.754 "write_zeroes": true, 01:40:33.754 "zcopy": false, 01:40:33.754 "get_zone_info": false, 01:40:33.754 "zone_management": false, 01:40:33.754 "zone_append": false, 01:40:33.754 "compare": false, 01:40:33.754 "compare_and_write": false, 01:40:33.754 "abort": false, 01:40:33.754 "seek_hole": true, 01:40:33.754 "seek_data": true, 01:40:33.754 "copy": false, 01:40:33.754 "nvme_iov_md": false 01:40:33.754 }, 01:40:33.754 "driver_specific": { 01:40:33.754 "lvol": { 01:40:33.754 "lvol_store_uuid": "5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0", 01:40:33.754 "base_bdev": "nvme0n1", 01:40:33.754 "thin_provision": true, 01:40:33.754 "num_allocated_clusters": 0, 01:40:33.754 "snapshot": false, 01:40:33.754 "clone": false, 01:40:33.754 "esnap_clone": false 01:40:33.754 } 01:40:33.754 } 01:40:33.754 } 01:40:33.754 ]' 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 01:40:33.754 05:35:25 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:40:34.011 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 01:40:34.012 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 01:40:34.012 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 149539d6-5a29-4c9e-9e13-521fd940e0f2 01:40:34.299 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:40:34.299 { 01:40:34.299 "name": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:34.299 "aliases": [ 01:40:34.299 "lvs/nvme0n1p0" 01:40:34.299 ], 01:40:34.299 "product_name": "Logical Volume", 01:40:34.299 "block_size": 4096, 01:40:34.299 "num_blocks": 26476544, 01:40:34.299 "uuid": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:34.299 "assigned_rate_limits": { 01:40:34.299 "rw_ios_per_sec": 0, 01:40:34.299 "rw_mbytes_per_sec": 0, 01:40:34.299 "r_mbytes_per_sec": 0, 01:40:34.300 "w_mbytes_per_sec": 0 01:40:34.300 }, 01:40:34.300 "claimed": false, 01:40:34.300 "zoned": false, 01:40:34.300 "supported_io_types": { 01:40:34.300 "read": true, 01:40:34.300 "write": true, 01:40:34.300 "unmap": true, 01:40:34.300 "flush": false, 01:40:34.300 "reset": true, 01:40:34.300 "nvme_admin": false, 01:40:34.300 "nvme_io": false, 01:40:34.300 "nvme_io_md": false, 01:40:34.300 "write_zeroes": true, 01:40:34.300 "zcopy": false, 01:40:34.300 "get_zone_info": false, 01:40:34.300 "zone_management": false, 01:40:34.300 "zone_append": false, 01:40:34.300 "compare": false, 01:40:34.300 "compare_and_write": false, 01:40:34.300 "abort": false, 01:40:34.300 "seek_hole": true, 01:40:34.300 "seek_data": true, 01:40:34.300 "copy": false, 01:40:34.300 "nvme_iov_md": false 01:40:34.300 }, 01:40:34.300 "driver_specific": { 01:40:34.300 "lvol": { 01:40:34.300 "lvol_store_uuid": "5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0", 01:40:34.300 "base_bdev": "nvme0n1", 01:40:34.300 "thin_provision": true, 01:40:34.300 "num_allocated_clusters": 0, 01:40:34.300 "snapshot": false, 01:40:34.300 "clone": false, 01:40:34.300 "esnap_clone": false 01:40:34.300 } 01:40:34.300 } 01:40:34.300 } 01:40:34.300 ]' 01:40:34.300 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 01:40:34.563 05:35:25 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 149539d6-5a29-4c9e-9e13-521fd940e0f2 -c nvc0n1p0 --l2p_dram_limit 60 01:40:34.822 [2024-12-09 05:35:26.222029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.222091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:40:34.822 [2024-12-09 05:35:26.222118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:40:34.822 
[2024-12-09 05:35:26.222131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.222261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.222284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:40:34.822 [2024-12-09 05:35:26.222304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 01:40:34.822 [2024-12-09 05:35:26.222317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.222357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:40:34.822 [2024-12-09 05:35:26.223497] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:40:34.822 [2024-12-09 05:35:26.223549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.223565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:40:34.822 [2024-12-09 05:35:26.223591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 01:40:34.822 [2024-12-09 05:35:26.223604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.223771] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 84f9641f-e66b-4728-a381-78b80f3ef027 01:40:34.822 [2024-12-09 05:35:26.225956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.226005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:40:34.822 [2024-12-09 05:35:26.226023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 01:40:34.822 [2024-12-09 05:35:26.226038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.237114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.237196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:40:34.822 [2024-12-09 05:35:26.237225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.938 ms 01:40:34.822 [2024-12-09 05:35:26.237249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.237407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.237432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:40:34.822 [2024-12-09 05:35:26.237447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 01:40:34.822 [2024-12-09 05:35:26.237468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.237568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.237595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:40:34.822 [2024-12-09 05:35:26.237609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:40:34.822 [2024-12-09 05:35:26.237628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.237706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:40:34.822 [2024-12-09 05:35:26.243202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 
05:35:26.243247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:40:34.822 [2024-12-09 05:35:26.243272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.520 ms 01:40:34.822 [2024-12-09 05:35:26.243284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.243351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.243368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:40:34.822 [2024-12-09 05:35:26.243385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:40:34.822 [2024-12-09 05:35:26.243396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.822 [2024-12-09 05:35:26.243466] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:40:34.822 [2024-12-09 05:35:26.243684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:40:34.822 [2024-12-09 05:35:26.243738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:40:34.822 [2024-12-09 05:35:26.243758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:40:34.822 [2024-12-09 05:35:26.243778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:40:34.822 [2024-12-09 05:35:26.243792] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:40:34.822 [2024-12-09 05:35:26.243808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:40:34.822 [2024-12-09 05:35:26.243820] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:40:34.822 [2024-12-09 05:35:26.243833] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:40:34.822 [2024-12-09 05:35:26.243844] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:40:34.822 [2024-12-09 05:35:26.243873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.822 [2024-12-09 05:35:26.243886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:40:34.822 [2024-12-09 05:35:26.243901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 01:40:34.822 [2024-12-09 05:35:26.243913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.823 [2024-12-09 05:35:26.244025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.823 [2024-12-09 05:35:26.244046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:40:34.823 [2024-12-09 05:35:26.244062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 01:40:34.823 [2024-12-09 05:35:26.244074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.823 [2024-12-09 05:35:26.244219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:40:34.823 [2024-12-09 05:35:26.244238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:40:34.823 [2024-12-09 05:35:26.244253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244279] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 01:40:34.823 [2024-12-09 05:35:26.244290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:40:34.823 [2024-12-09 05:35:26.244328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:40:34.823 [2024-12-09 05:35:26.244351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:40:34.823 [2024-12-09 05:35:26.244362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:40:34.823 [2024-12-09 05:35:26.244375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:40:34.823 [2024-12-09 05:35:26.244385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:40:34.823 [2024-12-09 05:35:26.244398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:40:34.823 [2024-12-09 05:35:26.244409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:40:34.823 [2024-12-09 05:35:26.244438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:40:34.823 [2024-12-09 05:35:26.244474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:40:34.823 [2024-12-09 05:35:26.244508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:40:34.823 [2024-12-09 05:35:26.244552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:40:34.823 [2024-12-09 05:35:26.244587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:40:34.823 [2024-12-09 05:35:26.244637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:40:34.823 [2024-12-09 05:35:26.244702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:40:34.823 [2024-12-09 05:35:26.244713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:40:34.823 [2024-12-09 05:35:26.244727] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:40:34.823 [2024-12-09 05:35:26.244737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:40:34.823 [2024-12-09 05:35:26.244751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:40:34.823 [2024-12-09 05:35:26.244762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:40:34.823 [2024-12-09 05:35:26.244788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:40:34.823 [2024-12-09 05:35:26.244802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244812] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:40:34.823 [2024-12-09 05:35:26.244826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:40:34.823 [2024-12-09 05:35:26.244838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:40:34.823 [2024-12-09 05:35:26.244864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:40:34.823 [2024-12-09 05:35:26.244881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:40:34.823 [2024-12-09 05:35:26.244892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:40:34.823 [2024-12-09 05:35:26.244906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:40:34.823 [2024-12-09 05:35:26.244916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:40:34.823 [2024-12-09 05:35:26.244930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:40:34.823 [2024-12-09 05:35:26.244947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:40:34.823 [2024-12-09 05:35:26.244964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.244977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:40:34.823 [2024-12-09 05:35:26.244992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:40:34.823 [2024-12-09 05:35:26.245004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:40:34.823 [2024-12-09 05:35:26.245024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:40:34.823 [2024-12-09 05:35:26.245036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:40:34.823 [2024-12-09 05:35:26.245050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:40:34.823 [2024-12-09 05:35:26.245062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:40:34.823 [2024-12-09 05:35:26.245077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 01:40:34.823 [2024-12-09 05:35:26.245089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:40:34.823 [2024-12-09 05:35:26.245108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.245120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.245134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.245146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.245161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:40:34.823 [2024-12-09 05:35:26.245173] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:40:34.823 [2024-12-09 05:35:26.245193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.245206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:40:34.823 [2024-12-09 05:35:26.245221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:40:34.823 [2024-12-09 05:35:26.245239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:40:34.823 [2024-12-09 05:35:26.245254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:40:34.823 [2024-12-09 05:35:26.245267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:34.823 [2024-12-09 05:35:26.245282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:40:34.823 [2024-12-09 05:35:26.245294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 01:40:34.823 [2024-12-09 05:35:26.245308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:34.823 [2024-12-09 05:35:26.245399] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
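One line worth flagging from earlier in this trace: "fio.sh: line 52: [: -eq: unary operator expected". That is a genuine quoting bug rather than an FTL failure -- an empty variable expanded to nothing inside a single-bracket test (the xtrace shows '[' -eq 1 ']'), so the test errored out and the script simply fell through to the "-z" branch that follows. A defensive spelling that survives empty or unset values (the variable name below is hypothetical):

    # Instead of:  [ $maybe_unset -eq 1 ]    which becomes  [ -eq 1 ]  when empty
    if [ "${maybe_unset:-0}" -eq 1 ]; then   # quote it and give it a numeric default
        echo "branch taken only when the flag is explicitly set to 1"
    fi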
01:40:34.823 [2024-12-09 05:35:26.245430] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:40:38.100 [2024-12-09 05:35:29.333869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.333972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:40:38.100 [2024-12-09 05:35:29.334011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3088.487 ms 01:40:38.100 [2024-12-09 05:35:29.334027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.372824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.372917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:40:38.100 [2024-12-09 05:35:29.372939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.495 ms 01:40:38.100 [2024-12-09 05:35:29.372956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.373209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.373244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:40:38.100 [2024-12-09 05:35:29.373260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 01:40:38.100 [2024-12-09 05:35:29.373278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.425452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.425566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:40:38.100 [2024-12-09 05:35:29.425588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.069 ms 01:40:38.100 [2024-12-09 05:35:29.425606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.425674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.425711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:40:38.100 [2024-12-09 05:35:29.425726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:40:38.100 [2024-12-09 05:35:29.425740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.426408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.426461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:40:38.100 [2024-12-09 05:35:29.426483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 01:40:38.100 [2024-12-09 05:35:29.426499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.426695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.100 [2024-12-09 05:35:29.426721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:40:38.100 [2024-12-09 05:35:29.426736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 01:40:38.100 [2024-12-09 05:35:29.426752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.100 [2024-12-09 05:35:29.447890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.448200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:40:38.101 [2024-12-09 
05:35:29.448233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.095 ms 01:40:38.101 [2024-12-09 05:35:29.448250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.464305] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:40:38.101 [2024-12-09 05:35:29.485938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.486023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:40:38.101 [2024-12-09 05:35:29.486070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.518 ms 01:40:38.101 [2024-12-09 05:35:29.486084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.550136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.550199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:40:38.101 [2024-12-09 05:35:29.550246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.957 ms 01:40:38.101 [2024-12-09 05:35:29.550259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.550533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.550556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:40:38.101 [2024-12-09 05:35:29.550576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 01:40:38.101 [2024-12-09 05:35:29.550589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.581455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.581518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:40:38.101 [2024-12-09 05:35:29.581559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.781 ms 01:40:38.101 [2024-12-09 05:35:29.581572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.612409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.612461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:40:38.101 [2024-12-09 05:35:29.612503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.773 ms 01:40:38.101 [2024-12-09 05:35:29.612515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.613458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.613508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:40:38.101 [2024-12-09 05:35:29.613527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 01:40:38.101 [2024-12-09 05:35:29.613539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.101 [2024-12-09 05:35:29.705964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.101 [2024-12-09 05:35:29.706047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:40:38.101 [2024-12-09 05:35:29.706083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.331 ms 01:40:38.101 [2024-12-09 05:35:29.706112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.357 [2024-12-09 
05:35:29.740784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.357 [2024-12-09 05:35:29.740861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:40:38.357 [2024-12-09 05:35:29.740904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.481 ms 01:40:38.357 [2024-12-09 05:35:29.740917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.357 [2024-12-09 05:35:29.771309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.357 [2024-12-09 05:35:29.771506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:40:38.357 [2024-12-09 05:35:29.771562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.310 ms 01:40:38.357 [2024-12-09 05:35:29.771576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.357 [2024-12-09 05:35:29.803509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.357 [2024-12-09 05:35:29.803776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:40:38.357 [2024-12-09 05:35:29.803826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.868 ms 01:40:38.357 [2024-12-09 05:35:29.803841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.357 [2024-12-09 05:35:29.803916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.357 [2024-12-09 05:35:29.803936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:40:38.357 [2024-12-09 05:35:29.803960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:40:38.357 [2024-12-09 05:35:29.803973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.357 [2024-12-09 05:35:29.804152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:38.357 [2024-12-09 05:35:29.804177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:40:38.357 [2024-12-09 05:35:29.804195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 01:40:38.357 [2024-12-09 05:35:29.804207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:38.357 [2024-12-09 05:35:29.805637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3583.118 ms, result 0 01:40:38.357 { 01:40:38.357 "name": "ftl0", 01:40:38.357 "uuid": "84f9641f-e66b-4728-a381-78b80f3ef027" 01:40:38.357 } 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:40:38.357 05:35:29 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:40:38.613 05:35:30 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 01:40:38.870 [ 01:40:38.870 { 01:40:38.870 "name": "ftl0", 01:40:38.870 "aliases": [ 01:40:38.870 "84f9641f-e66b-4728-a381-78b80f3ef027" 01:40:38.870 ], 01:40:38.870 "product_name": "FTL 
disk", 01:40:38.870 "block_size": 4096, 01:40:38.870 "num_blocks": 20971520, 01:40:38.870 "uuid": "84f9641f-e66b-4728-a381-78b80f3ef027", 01:40:38.870 "assigned_rate_limits": { 01:40:38.870 "rw_ios_per_sec": 0, 01:40:38.870 "rw_mbytes_per_sec": 0, 01:40:38.870 "r_mbytes_per_sec": 0, 01:40:38.870 "w_mbytes_per_sec": 0 01:40:38.870 }, 01:40:38.870 "claimed": false, 01:40:38.870 "zoned": false, 01:40:38.870 "supported_io_types": { 01:40:38.870 "read": true, 01:40:38.870 "write": true, 01:40:38.870 "unmap": true, 01:40:38.870 "flush": true, 01:40:38.870 "reset": false, 01:40:38.870 "nvme_admin": false, 01:40:38.870 "nvme_io": false, 01:40:38.870 "nvme_io_md": false, 01:40:38.870 "write_zeroes": true, 01:40:38.870 "zcopy": false, 01:40:38.870 "get_zone_info": false, 01:40:38.870 "zone_management": false, 01:40:38.870 "zone_append": false, 01:40:38.870 "compare": false, 01:40:38.870 "compare_and_write": false, 01:40:38.870 "abort": false, 01:40:38.870 "seek_hole": false, 01:40:38.870 "seek_data": false, 01:40:38.870 "copy": false, 01:40:38.870 "nvme_iov_md": false 01:40:38.870 }, 01:40:38.870 "driver_specific": { 01:40:38.870 "ftl": { 01:40:38.870 "base_bdev": "149539d6-5a29-4c9e-9e13-521fd940e0f2", 01:40:38.870 "cache": "nvc0n1p0" 01:40:38.870 } 01:40:38.870 } 01:40:38.870 } 01:40:38.870 ] 01:40:38.870 05:35:30 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 01:40:38.870 05:35:30 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 01:40:38.870 05:35:30 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:40:39.127 05:35:30 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 01:40:39.127 05:35:30 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:40:39.384 [2024-12-09 05:35:30.874704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.874835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:40:39.384 [2024-12-09 05:35:30.874869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:40:39.384 [2024-12-09 05:35:30.874898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.874972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:40:39.384 [2024-12-09 05:35:30.879166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.879365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:40:39.384 [2024-12-09 05:35:30.879524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 01:40:39.384 [2024-12-09 05:35:30.879646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.880330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.880495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:40:39.384 [2024-12-09 05:35:30.880652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 01:40:39.384 [2024-12-09 05:35:30.880792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.884282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.884427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:40:39.384 
[2024-12-09 05:35:30.884600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 01:40:39.384 [2024-12-09 05:35:30.884732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.891319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.891506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:40:39.384 [2024-12-09 05:35:30.891646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.499 ms 01:40:39.384 [2024-12-09 05:35:30.891827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.923218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.923445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:40:39.384 [2024-12-09 05:35:30.923610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.219 ms 01:40:39.384 [2024-12-09 05:35:30.923760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.942340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.942587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:40:39.384 [2024-12-09 05:35:30.942745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.384 ms 01:40:39.384 [2024-12-09 05:35:30.942896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.943191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.943339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:40:39.384 [2024-12-09 05:35:30.943461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 01:40:39.384 [2024-12-09 05:35:30.943515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.384 [2024-12-09 05:35:30.974235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.384 [2024-12-09 05:35:30.974446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:40:39.384 [2024-12-09 05:35:30.974625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.582 ms 01:40:39.384 [2024-12-09 05:35:30.974779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.643 [2024-12-09 05:35:31.005990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.643 [2024-12-09 05:35:31.006227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:40:39.643 [2024-12-09 05:35:31.006367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.100 ms 01:40:39.643 [2024-12-09 05:35:31.006422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.643 [2024-12-09 05:35:31.036957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.643 [2024-12-09 05:35:31.037195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:40:39.643 [2024-12-09 05:35:31.037366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.418 ms 01:40:39.643 [2024-12-09 05:35:31.037392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.643 [2024-12-09 05:35:31.067154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.643 [2024-12-09 05:35:31.067360] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:40:39.643 [2024-12-09 05:35:31.067498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.555 ms 01:40:39.643 [2024-12-09 05:35:31.067553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.643 [2024-12-09 05:35:31.067744] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:40:39.643 [2024-12-09 05:35:31.067889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.068903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.069903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.070967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 
[2024-12-09 05:35:31.071041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.071109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:40:39.643 [2024-12-09 05:35:31.071284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.071434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.071589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.071750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.071884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 01:40:39.644 [2024-12-09 05:35:31.072262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.072996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.073010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.073023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.073040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:40:39.644 [2024-12-09 05:35:31.073082] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:40:39.644 [2024-12-09 05:35:31.073112] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84f9641f-e66b-4728-a381-78b80f3ef027 01:40:39.644 [2024-12-09 05:35:31.073126] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:40:39.644 [2024-12-09 05:35:31.073143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:40:39.644 [2024-12-09 05:35:31.073158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:40:39.644 [2024-12-09 05:35:31.073172] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:40:39.644 [2024-12-09 05:35:31.073184] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:40:39.644 [2024-12-09 05:35:31.073199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:40:39.644 [2024-12-09 05:35:31.073211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:40:39.644 [2024-12-09 05:35:31.073224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:40:39.644 [2024-12-09 05:35:31.073235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:40:39.644 [2024-12-09 05:35:31.073250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.644 [2024-12-09 05:35:31.073263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:40:39.644 [2024-12-09 05:35:31.073279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.530 ms 01:40:39.644 [2024-12-09 05:35:31.073291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.644 [2024-12-09 05:35:31.090833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.644 [2024-12-09 05:35:31.090876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:40:39.644 [2024-12-09 05:35:31.090915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.462 ms 01:40:39.645 [2024-12-09 05:35:31.090927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.645 [2024-12-09 05:35:31.091490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:40:39.645 [2024-12-09 05:35:31.091529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:40:39.645 [2024-12-09 05:35:31.091550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 01:40:39.645 [2024-12-09 05:35:31.091562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.645 [2024-12-09 05:35:31.150498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.645 [2024-12-09 05:35:31.150554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:40:39.645 [2024-12-09 05:35:31.150593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.645 [2024-12-09 05:35:31.150606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
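In the statistics dump above, 'WAF: inf' follows directly from the two counters beside it: write amplification is the ratio of media writes to host writes, WAF = total writes / user writes = 960 / 0, which has no finite value because this device is being torn down before any user I/O was issued — all 960 block writes were internal metadata (superblock, band/chunk info, valid and trim maps) persisted during the startup and shutdown steps traced earlier.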
01:40:39.645 [2024-12-09 05:35:31.150724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.645 [2024-12-09 05:35:31.150745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:40:39.645 [2024-12-09 05:35:31.150762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.645 [2024-12-09 05:35:31.150774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.645 [2024-12-09 05:35:31.150975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.645 [2024-12-09 05:35:31.151000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:40:39.645 [2024-12-09 05:35:31.151017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.645 [2024-12-09 05:35:31.151029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.645 [2024-12-09 05:35:31.151072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.645 [2024-12-09 05:35:31.151086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:40:39.645 [2024-12-09 05:35:31.151101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.645 [2024-12-09 05:35:31.151112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.645 [2024-12-09 05:35:31.259227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.902 [2024-12-09 05:35:31.259521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:40:39.902 [2024-12-09 05:35:31.259558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.902 [2024-12-09 05:35:31.259572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.902 [2024-12-09 05:35:31.341583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.902 [2024-12-09 05:35:31.341660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:40:39.902 [2024-12-09 05:35:31.341758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.902 [2024-12-09 05:35:31.341771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.902 [2024-12-09 05:35:31.341937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.902 [2024-12-09 05:35:31.341957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:40:39.902 [2024-12-09 05:35:31.341978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.902 [2024-12-09 05:35:31.341990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.902 [2024-12-09 05:35:31.342116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.902 [2024-12-09 05:35:31.342165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:40:39.902 [2024-12-09 05:35:31.342181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.902 [2024-12-09 05:35:31.342192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.902 [2024-12-09 05:35:31.342341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.902 [2024-12-09 05:35:31.342361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:40:39.902 [2024-12-09 05:35:31.342380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.902 [2024-12-09 
05:35:31.342391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.902 [2024-12-09 05:35:31.342500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.903 [2024-12-09 05:35:31.342520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:40:39.903 [2024-12-09 05:35:31.342536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.903 [2024-12-09 05:35:31.342548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.903 [2024-12-09 05:35:31.342609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.903 [2024-12-09 05:35:31.342625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:40:39.903 [2024-12-09 05:35:31.342640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.903 [2024-12-09 05:35:31.342655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.903 [2024-12-09 05:35:31.342798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:40:39.903 [2024-12-09 05:35:31.342839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:40:39.903 [2024-12-09 05:35:31.342857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:40:39.903 [2024-12-09 05:35:31.342868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:40:39.903 [2024-12-09 05:35:31.343105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 468.381 ms, result 0 01:40:39.903 true 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76957 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76957 ']' 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76957 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76957 01:40:39.903 killing process with pid 76957 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76957' 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76957 01:40:39.903 05:35:31 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76957 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:40:45.168 05:35:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 01:40:45.168 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 01:40:45.168 fio-3.35 01:40:45.168 Starting 1 thread 01:40:50.442 01:40:50.442 test: (groupid=0, jobs=1): err= 0: pid=77181: Mon Dec 9 05:35:41 2024 01:40:50.442 read: IOPS=865, BW=57.5MiB/s (60.3MB/s)(255MiB/4427msec) 01:40:50.442 slat (nsec): min=5170, max=44761, avg=7368.80, stdev=3608.49 01:40:50.442 clat (usec): min=371, max=908, avg=512.65, stdev=52.70 01:40:50.442 lat (usec): min=377, max=926, avg=520.01, stdev=53.55 01:40:50.442 clat percentiles (usec): 01:40:50.442 | 1.00th=[ 404], 5.00th=[ 449], 10.00th=[ 457], 20.00th=[ 474], 01:40:50.442 | 30.00th=[ 482], 40.00th=[ 494], 50.00th=[ 502], 60.00th=[ 515], 01:40:50.442 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 611], 01:40:50.442 | 99.00th=[ 668], 99.50th=[ 685], 99.90th=[ 734], 99.95th=[ 742], 01:40:50.442 | 99.99th=[ 906] 01:40:50.442 write: IOPS=872, BW=57.9MiB/s (60.7MB/s)(256MiB/4422msec); 0 zone resets 01:40:50.442 slat (usec): min=18, max=159, avg=24.53, stdev= 7.57 01:40:50.442 clat (usec): min=408, max=1092, avg=593.02, stdev=68.62 01:40:50.442 lat (usec): min=430, max=1136, avg=617.55, stdev=69.57 01:40:50.442 clat percentiles (usec): 01:40:50.442 | 1.00th=[ 474], 5.00th=[ 498], 10.00th=[ 519], 20.00th=[ 545], 01:40:50.442 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 594], 01:40:50.442 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 693], 01:40:50.442 | 99.00th=[ 865], 99.50th=[ 914], 99.90th=[ 1045], 99.95th=[ 1090], 01:40:50.442 | 99.99th=[ 1090] 01:40:50.442 bw ( KiB/s): min=56984, max=61064, per=99.60%, avg=59058.00, stdev=1756.38, samples=8 01:40:50.443 iops : min= 838, max= 898, avg=868.50, stdev=25.83, samples=8 01:40:50.443 lat (usec) : 500=26.51%, 750=72.42%, 1000=1.01% 01:40:50.443 lat (msec) : 
2=0.07% 01:40:50.443 cpu : usr=99.10%, sys=0.18%, ctx=15, majf=0, minf=1169 01:40:50.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:40:50.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:40:50.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:40:50.443 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 01:40:50.443 latency : target=0, window=0, percentile=100.00%, depth=1 01:40:50.443 01:40:50.443 Run status group 0 (all jobs): 01:40:50.443 READ: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=255MiB (267MB), run=4427-4427msec 01:40:50.443 WRITE: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=256MiB (269MB), run=4422-4422msec 01:40:52.346 ----------------------------------------------------- 01:40:52.346 Suppressions used: 01:40:52.346 count bytes template 01:40:52.346 1 5 /usr/src/fio/parse.c 01:40:52.346 1 8 libtcmalloc_minimal.so 01:40:52.346 1 904 libcrypto.so 01:40:52.346 ----------------------------------------------------- 01:40:52.346 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:40:52.346 05:35:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 01:40:52.605 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 01:40:52.605 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 01:40:52.605 fio-3.35 01:40:52.605 Starting 2 threads 01:41:24.696 01:41:24.696 first_half: (groupid=0, jobs=1): err= 0: pid=77291: Mon Dec 9 05:36:14 2024 01:41:24.696 read: IOPS=2273, BW=9092KiB/s (9311kB/s)(256MiB/28800msec) 01:41:24.696 slat (usec): min=4, max=943, avg= 8.54, stdev= 6.10 01:41:24.696 clat (usec): min=1106, max=322567, avg=47647.20, stdev=29063.53 01:41:24.696 lat (usec): min=1111, max=322576, avg=47655.73, stdev=29063.77 01:41:24.696 clat percentiles (msec): 01:41:24.696 | 1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 01:41:24.696 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 01:41:24.696 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 51], 95.00th=[ 91], 01:41:24.696 | 99.00th=[ 201], 99.50th=[ 218], 99.90th=[ 257], 99.95th=[ 284], 01:41:24.696 | 99.99th=[ 317] 01:41:24.696 write: IOPS=2279, BW=9118KiB/s (9337kB/s)(256MiB/28751msec); 0 zone resets 01:41:24.696 slat (usec): min=5, max=226, avg= 9.04, stdev= 5.38 01:41:24.696 clat (usec): min=498, max=56088, avg=8616.31, stdev=8888.87 01:41:24.696 lat (usec): min=505, max=56097, avg=8625.35, stdev=8888.98 01:41:24.696 clat percentiles (usec): 01:41:24.696 | 1.00th=[ 1090], 5.00th=[ 1483], 10.00th=[ 1762], 20.00th=[ 2999], 01:41:24.696 | 30.00th=[ 4228], 40.00th=[ 5407], 50.00th=[ 6325], 60.00th=[ 7373], 01:41:24.696 | 70.00th=[ 8160], 80.00th=[ 9896], 90.00th=[16909], 95.00th=[31851], 01:41:24.696 | 99.00th=[44303], 99.50th=[45351], 99.90th=[48497], 99.95th=[51643], 01:41:24.696 | 99.99th=[55313] 01:41:24.696 bw ( KiB/s): min= 1760, max=53408, per=100.00%, avg=22680.00, stdev=14794.32, samples=23 01:41:24.696 iops : min= 440, max=13352, avg=5670.00, stdev=3698.58, samples=23 01:41:24.696 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.26% 01:41:24.696 lat (msec) : 2=6.54%, 4=6.94%, 10=26.36%, 20=7.94%, 50=46.59% 01:41:24.696 lat (msec) : 100=2.98%, 250=2.25%, 500=0.07% 01:41:24.696 cpu : usr=99.03%, sys=0.29%, ctx=753, majf=0, minf=5552 01:41:24.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:41:24.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:41:24.696 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 01:41:24.696 issued rwts: total=65465,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:41:24.696 latency : target=0, window=0, percentile=100.00%, depth=128 01:41:24.696 second_half: (groupid=0, jobs=1): err= 0: pid=77292: Mon Dec 9 05:36:14 2024 01:41:24.696 read: IOPS=2295, BW=9182KiB/s (9402kB/s)(256MiB/28529msec) 01:41:24.696 slat (usec): min=4, max=112, avg= 8.71, stdev= 3.93 01:41:24.696 clat (msec): min=11, max=260, avg=48.05, stdev=25.79 01:41:24.696 lat (msec): min=11, max=260, avg=48.06, stdev=25.79 01:41:24.696 clat percentiles (msec): 01:41:24.696 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 01:41:24.696 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 43], 01:41:24.696 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 53], 95.00th=[ 90], 01:41:24.696 | 99.00th=[ 186], 99.50th=[ 203], 
99.90th=[ 236], 99.95th=[ 245], 01:41:24.696 | 99.99th=[ 255] 01:41:24.696 write: IOPS=2311, BW=9247KiB/s (9469kB/s)(256MiB/28349msec); 0 zone resets 01:41:24.696 slat (usec): min=5, max=562, avg= 9.56, stdev= 6.46 01:41:24.696 clat (usec): min=506, max=48571, avg=7677.46, stdev=5514.29 01:41:24.696 lat (usec): min=513, max=48580, avg=7687.02, stdev=5514.36 01:41:24.696 clat percentiles (usec): 01:41:24.696 | 1.00th=[ 1303], 5.00th=[ 2073], 10.00th=[ 2999], 20.00th=[ 4080], 01:41:24.696 | 30.00th=[ 5014], 40.00th=[ 5800], 50.00th=[ 6521], 60.00th=[ 7242], 01:41:24.696 | 70.00th=[ 7963], 80.00th=[ 9241], 90.00th=[14877], 95.00th=[16909], 01:41:24.696 | 99.00th=[34866], 99.50th=[42206], 99.90th=[46924], 99.95th=[47449], 01:41:24.696 | 99.99th=[48497] 01:41:24.696 bw ( KiB/s): min= 2936, max=40824, per=100.00%, avg=20971.52, stdev=13197.66, samples=25 01:41:24.696 iops : min= 734, max=10206, avg=5242.88, stdev=3299.42, samples=25 01:41:24.696 lat (usec) : 750=0.04%, 1000=0.14% 01:41:24.696 lat (msec) : 2=2.11%, 4=7.11%, 10=31.60%, 20=8.09%, 50=45.05% 01:41:24.696 lat (msec) : 100=3.67%, 250=2.18%, 500=0.01% 01:41:24.696 cpu : usr=98.91%, sys=0.40%, ctx=75, majf=0, minf=5561 01:41:24.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:41:24.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:41:24.696 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 01:41:24.696 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:41:24.696 latency : target=0, window=0, percentile=100.00%, depth=128 01:41:24.696 01:41:24.696 Run status group 0 (all jobs): 01:41:24.696 READ: bw=17.8MiB/s (18.6MB/s), 9092KiB/s-9182KiB/s (9311kB/s-9402kB/s), io=512MiB (536MB), run=28529-28800msec 01:41:24.696 WRITE: bw=17.8MiB/s (18.7MB/s), 9118KiB/s-9247KiB/s (9337kB/s-9469kB/s), io=512MiB (537MB), run=28349-28751msec 01:41:25.264 ----------------------------------------------------- 01:41:25.264 Suppressions used: 01:41:25.264 count bytes template 01:41:25.264 2 10 /usr/src/fio/parse.c 01:41:25.264 2 192 /usr/src/fio/iolog.c 01:41:25.264 1 8 libtcmalloc_minimal.so 01:41:25.264 1 904 libcrypto.so 01:41:25.264 ----------------------------------------------------- 01:41:25.264 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 
-- # local sanitizers 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:41:25.523 05:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 01:41:25.783 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 01:41:25.783 fio-3.35 01:41:25.783 Starting 1 thread 01:41:43.945 01:41:43.945 test: (groupid=0, jobs=1): err= 0: pid=77650: Mon Dec 9 05:36:35 2024 01:41:43.945 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(255MiB/10697msec) 01:41:43.945 slat (usec): min=4, max=133, avg= 7.11, stdev= 2.95 01:41:43.945 clat (usec): min=910, max=40426, avg=20988.05, stdev=1053.85 01:41:43.945 lat (usec): min=915, max=40431, avg=20995.15, stdev=1053.89 01:41:43.945 clat percentiles (usec): 01:41:43.945 | 1.00th=[19268], 5.00th=[19792], 10.00th=[20055], 20.00th=[20317], 01:41:43.945 | 30.00th=[20579], 40.00th=[20841], 50.00th=[20841], 60.00th=[21103], 01:41:43.945 | 70.00th=[21365], 80.00th=[21627], 90.00th=[21890], 95.00th=[22152], 01:41:43.945 | 99.00th=[23200], 99.50th=[23987], 99.90th=[30278], 99.95th=[35390], 01:41:43.945 | 99.99th=[39584] 01:41:43.945 write: IOPS=10.9k, BW=42.7MiB/s (44.7MB/s)(256MiB/5999msec); 0 zone resets 01:41:43.945 slat (usec): min=5, max=245, avg= 9.60, stdev= 5.85 01:41:43.945 clat (usec): min=652, max=71453, avg=11659.41, stdev=14734.99 01:41:43.945 lat (usec): min=679, max=71460, avg=11669.00, stdev=14735.03 01:41:43.945 clat percentiles (usec): 01:41:43.945 | 1.00th=[ 1004], 5.00th=[ 1221], 10.00th=[ 1352], 20.00th=[ 1532], 01:41:43.945 | 30.00th=[ 1762], 40.00th=[ 2311], 50.00th=[ 7504], 60.00th=[ 8717], 01:41:43.945 | 70.00th=[10159], 80.00th=[12387], 90.00th=[41157], 95.00th=[46400], 01:41:43.945 | 99.00th=[52167], 99.50th=[54789], 99.90th=[64750], 99.95th=[67634], 01:41:43.945 | 99.99th=[70779] 01:41:43.945 bw ( KiB/s): min=34768, max=61984, per=99.98%, avg=43690.67, stdev=9211.22, samples=12 01:41:43.945 iops : min= 8692, max=15496, avg=10922.67, stdev=2302.81, samples=12 01:41:43.945 lat (usec) : 750=0.01%, 1000=0.49% 01:41:43.945 lat (msec) : 2=17.75%, 4=2.68%, 10=13.89%, 20=11.75%, 50=52.52% 01:41:43.945 lat (msec) : 100=0.91% 01:41:43.945 cpu : usr=98.47%, sys=0.73%, ctx=45, majf=0, minf=5565 01:41:43.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 01:41:43.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:41:43.945 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:41:43.945 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:41:43.945 latency : target=0, window=0, percentile=100.00%, depth=128 01:41:43.945 01:41:43.945 Run status group 0 (all jobs): 01:41:43.945 READ: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=255MiB (267MB), run=10697-10697msec 01:41:43.945 WRITE: bw=42.7MiB/s (44.7MB/s), 42.7MiB/s-42.7MiB/s (44.7MB/s-44.7MB/s), io=256MiB (268MB), run=5999-5999msec 01:41:45.850 ----------------------------------------------------- 01:41:45.850 Suppressions used: 01:41:45.850 count bytes template 01:41:45.850 1 5 /usr/src/fio/parse.c 01:41:45.850 2 192 /usr/src/fio/iolog.c 01:41:45.850 1 8 libtcmalloc_minimal.so 01:41:45.851 1 904 libcrypto.so 01:41:45.851 ----------------------------------------------------- 01:41:45.851 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:41:45.851 Remove shared memory files 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57933 /dev/shm/spdk_tgt_trace.pid75876 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 01:41:45.851 ************************************ 01:41:45.851 END TEST ftl_fio_basic 01:41:45.851 ************************************ 01:41:45.851 01:41:45.851 real 1m16.151s 01:41:45.851 user 2m45.754s 01:41:45.851 sys 0m4.431s 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:41:45.851 05:36:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 01:41:45.851 05:36:37 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 01:41:45.851 05:36:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:41:45.851 05:36:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:41:45.851 05:36:37 ftl -- common/autotest_common.sh@10 -- # set +x 01:41:45.851 ************************************ 01:41:45.851 START TEST ftl_bdevperf 01:41:45.851 ************************************ 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 01:41:45.851 * Looking for test storage... 
01:41:45.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:41:45.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:41:45.851 --rc genhtml_branch_coverage=1 01:41:45.851 --rc genhtml_function_coverage=1 01:41:45.851 --rc genhtml_legend=1 01:41:45.851 --rc geninfo_all_blocks=1 01:41:45.851 --rc geninfo_unexecuted_blocks=1 01:41:45.851 01:41:45.851 ' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:41:45.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:41:45.851 --rc genhtml_branch_coverage=1 01:41:45.851 
--rc genhtml_function_coverage=1 01:41:45.851 --rc genhtml_legend=1 01:41:45.851 --rc geninfo_all_blocks=1 01:41:45.851 --rc geninfo_unexecuted_blocks=1 01:41:45.851 01:41:45.851 ' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:41:45.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:41:45.851 --rc genhtml_branch_coverage=1 01:41:45.851 --rc genhtml_function_coverage=1 01:41:45.851 --rc genhtml_legend=1 01:41:45.851 --rc geninfo_all_blocks=1 01:41:45.851 --rc geninfo_unexecuted_blocks=1 01:41:45.851 01:41:45.851 ' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:41:45.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:41:45.851 --rc genhtml_branch_coverage=1 01:41:45.851 --rc genhtml_function_coverage=1 01:41:45.851 --rc genhtml_legend=1 01:41:45.851 --rc geninfo_all_blocks=1 01:41:45.851 --rc geninfo_unexecuted_blocks=1 01:41:45.851 01:41:45.851 ' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77921 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77921 01:41:45.851 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77921 ']' 01:41:45.852 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:41:45.852 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 01:41:45.852 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:41:45.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:41:45.852 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 01:41:45.852 05:36:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:41:46.110 [2024-12-09 05:36:37.527397] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
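Editor's note: bdevperf.sh launches the app with -z, which brings the RPC server up and then idles until a perform_tests RPC arrives, while waitforlisten polls the RPC socket until it answers. A hand-run equivalent of the handshake shown above, assuming a stock SPDK tree (the polling loop stands in for autotest's waitforlisten helper):

    #!/usr/bin/env bash
    # Sketch: start bdevperf idle and drive it over RPC (stock SPDK tree assumed).
    ./build/examples/bdevperf -z -T ftl0 &     # -z: init RPC server, then wait; -T: target bdev
    bdevperf_pid=$!

    # Block until the RPC socket answers (stand-in for waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

    # The real script first creates the ftl0 bdev over this same socket (the rpc.py
    # calls that follow in this log) before starting a timed run:
    ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632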
01:41:46.110 [2024-12-09 05:36:37.528162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77921 ] 01:41:46.110 [2024-12-09 05:36:37.723146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:41:46.369 [2024-12-09 05:36:37.889885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 01:41:46.936 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:41:47.503 05:36:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:41:47.762 { 01:41:47.762 "name": "nvme0n1", 01:41:47.762 "aliases": [ 01:41:47.762 "428d822a-ad9c-4d35-841a-6c939cc66c7a" 01:41:47.762 ], 01:41:47.762 "product_name": "NVMe disk", 01:41:47.762 "block_size": 4096, 01:41:47.762 "num_blocks": 1310720, 01:41:47.762 "uuid": "428d822a-ad9c-4d35-841a-6c939cc66c7a", 01:41:47.762 "numa_id": -1, 01:41:47.762 "assigned_rate_limits": { 01:41:47.762 "rw_ios_per_sec": 0, 01:41:47.762 "rw_mbytes_per_sec": 0, 01:41:47.762 "r_mbytes_per_sec": 0, 01:41:47.762 "w_mbytes_per_sec": 0 01:41:47.762 }, 01:41:47.762 "claimed": true, 01:41:47.762 "claim_type": "read_many_write_one", 01:41:47.762 "zoned": false, 01:41:47.762 "supported_io_types": { 01:41:47.762 "read": true, 01:41:47.762 "write": true, 01:41:47.762 "unmap": true, 01:41:47.762 "flush": true, 01:41:47.762 "reset": true, 01:41:47.762 "nvme_admin": true, 01:41:47.762 "nvme_io": true, 01:41:47.762 "nvme_io_md": false, 01:41:47.762 "write_zeroes": true, 01:41:47.762 "zcopy": false, 01:41:47.762 "get_zone_info": false, 01:41:47.762 "zone_management": false, 01:41:47.762 "zone_append": false, 01:41:47.762 "compare": true, 01:41:47.762 "compare_and_write": false, 01:41:47.762 "abort": true, 01:41:47.762 "seek_hole": false, 01:41:47.762 "seek_data": false, 01:41:47.762 "copy": true, 01:41:47.762 "nvme_iov_md": false 01:41:47.762 }, 01:41:47.762 "driver_specific": { 01:41:47.762 
"nvme": [ 01:41:47.762 { 01:41:47.762 "pci_address": "0000:00:11.0", 01:41:47.762 "trid": { 01:41:47.762 "trtype": "PCIe", 01:41:47.762 "traddr": "0000:00:11.0" 01:41:47.762 }, 01:41:47.762 "ctrlr_data": { 01:41:47.762 "cntlid": 0, 01:41:47.762 "vendor_id": "0x1b36", 01:41:47.762 "model_number": "QEMU NVMe Ctrl", 01:41:47.762 "serial_number": "12341", 01:41:47.762 "firmware_revision": "8.0.0", 01:41:47.762 "subnqn": "nqn.2019-08.org.qemu:12341", 01:41:47.762 "oacs": { 01:41:47.762 "security": 0, 01:41:47.762 "format": 1, 01:41:47.762 "firmware": 0, 01:41:47.762 "ns_manage": 1 01:41:47.762 }, 01:41:47.762 "multi_ctrlr": false, 01:41:47.762 "ana_reporting": false 01:41:47.762 }, 01:41:47.762 "vs": { 01:41:47.762 "nvme_version": "1.4" 01:41:47.762 }, 01:41:47.762 "ns_data": { 01:41:47.762 "id": 1, 01:41:47.762 "can_share": false 01:41:47.762 } 01:41:47.762 } 01:41:47.762 ], 01:41:47.762 "mp_policy": "active_passive" 01:41:47.762 } 01:41:47.762 } 01:41:47.762 ]' 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:41:47.762 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:41:48.022 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0 01:41:48.022 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 01:41:48.022 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5532f1d2-f2f1-4975-ac6b-4e2c7d387ae0 01:41:48.281 05:36:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:41:48.539 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b53d6e8f-9da7-4320-8cf7-241463b8e6d6 01:41:48.539 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b53d6e8f-9da7-4320-8cf7-241463b8e6d6 01:41:48.797 05:36:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:48.798 05:36:40 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:41:48.798 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:41:49.365 { 01:41:49.365 "name": "1aef8fd0-7d4c-4e5d-bca1-69b1256c6574", 01:41:49.365 "aliases": [ 01:41:49.365 "lvs/nvme0n1p0" 01:41:49.365 ], 01:41:49.365 "product_name": "Logical Volume", 01:41:49.365 "block_size": 4096, 01:41:49.365 "num_blocks": 26476544, 01:41:49.365 "uuid": "1aef8fd0-7d4c-4e5d-bca1-69b1256c6574", 01:41:49.365 "assigned_rate_limits": { 01:41:49.365 "rw_ios_per_sec": 0, 01:41:49.365 "rw_mbytes_per_sec": 0, 01:41:49.365 "r_mbytes_per_sec": 0, 01:41:49.365 "w_mbytes_per_sec": 0 01:41:49.365 }, 01:41:49.365 "claimed": false, 01:41:49.365 "zoned": false, 01:41:49.365 "supported_io_types": { 01:41:49.365 "read": true, 01:41:49.365 "write": true, 01:41:49.365 "unmap": true, 01:41:49.365 "flush": false, 01:41:49.365 "reset": true, 01:41:49.365 "nvme_admin": false, 01:41:49.365 "nvme_io": false, 01:41:49.365 "nvme_io_md": false, 01:41:49.365 "write_zeroes": true, 01:41:49.365 "zcopy": false, 01:41:49.365 "get_zone_info": false, 01:41:49.365 "zone_management": false, 01:41:49.365 "zone_append": false, 01:41:49.365 "compare": false, 01:41:49.365 "compare_and_write": false, 01:41:49.365 "abort": false, 01:41:49.365 "seek_hole": true, 01:41:49.365 "seek_data": true, 01:41:49.365 "copy": false, 01:41:49.365 "nvme_iov_md": false 01:41:49.365 }, 01:41:49.365 "driver_specific": { 01:41:49.365 "lvol": { 01:41:49.365 "lvol_store_uuid": "b53d6e8f-9da7-4320-8cf7-241463b8e6d6", 01:41:49.365 "base_bdev": "nvme0n1", 01:41:49.365 "thin_provision": true, 01:41:49.365 "num_allocated_clusters": 0, 01:41:49.365 "snapshot": false, 01:41:49.365 "clone": false, 01:41:49.365 "esnap_clone": false 01:41:49.365 } 01:41:49.365 } 01:41:49.365 } 01:41:49.365 ]' 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:41:49.365 05:36:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 01:41:49.366 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 01:41:49.366 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 01:41:49.366 05:36:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:41:49.624 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:49.883 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:41:49.883 { 01:41:49.883 "name": "1aef8fd0-7d4c-4e5d-bca1-69b1256c6574", 01:41:49.883 "aliases": [ 01:41:49.883 "lvs/nvme0n1p0" 01:41:49.883 ], 01:41:49.883 "product_name": "Logical Volume", 01:41:49.883 "block_size": 4096, 01:41:49.883 "num_blocks": 26476544, 01:41:49.883 "uuid": "1aef8fd0-7d4c-4e5d-bca1-69b1256c6574", 01:41:49.883 "assigned_rate_limits": { 01:41:49.883 "rw_ios_per_sec": 0, 01:41:49.883 "rw_mbytes_per_sec": 0, 01:41:49.883 "r_mbytes_per_sec": 0, 01:41:49.883 "w_mbytes_per_sec": 0 01:41:49.883 }, 01:41:49.883 "claimed": false, 01:41:49.883 "zoned": false, 01:41:49.883 "supported_io_types": { 01:41:49.883 "read": true, 01:41:49.883 "write": true, 01:41:49.883 "unmap": true, 01:41:49.883 "flush": false, 01:41:49.883 "reset": true, 01:41:49.883 "nvme_admin": false, 01:41:49.883 "nvme_io": false, 01:41:49.883 "nvme_io_md": false, 01:41:49.883 "write_zeroes": true, 01:41:49.883 "zcopy": false, 01:41:49.883 "get_zone_info": false, 01:41:49.883 "zone_management": false, 01:41:49.883 "zone_append": false, 01:41:49.883 "compare": false, 01:41:49.883 "compare_and_write": false, 01:41:49.883 "abort": false, 01:41:49.883 "seek_hole": true, 01:41:49.883 "seek_data": true, 01:41:49.883 "copy": false, 01:41:49.883 "nvme_iov_md": false 01:41:49.883 }, 01:41:49.883 "driver_specific": { 01:41:49.883 "lvol": { 01:41:49.883 "lvol_store_uuid": "b53d6e8f-9da7-4320-8cf7-241463b8e6d6", 01:41:49.883 "base_bdev": "nvme0n1", 01:41:49.883 "thin_provision": true, 01:41:49.883 "num_allocated_clusters": 0, 01:41:49.883 "snapshot": false, 01:41:49.883 "clone": false, 01:41:49.883 "esnap_clone": false 01:41:49.883 } 01:41:49.883 } 01:41:49.883 } 01:41:49.883 ]' 01:41:49.883 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 01:41:50.142 05:36:41 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 01:41:50.401 05:36:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:41:50.660 { 01:41:50.660 "name": "1aef8fd0-7d4c-4e5d-bca1-69b1256c6574", 01:41:50.660 "aliases": [ 01:41:50.660 "lvs/nvme0n1p0" 01:41:50.660 ], 01:41:50.660 "product_name": "Logical Volume", 01:41:50.660 "block_size": 4096, 01:41:50.660 "num_blocks": 26476544, 01:41:50.660 "uuid": "1aef8fd0-7d4c-4e5d-bca1-69b1256c6574", 01:41:50.660 "assigned_rate_limits": { 01:41:50.660 "rw_ios_per_sec": 0, 01:41:50.660 "rw_mbytes_per_sec": 0, 01:41:50.660 "r_mbytes_per_sec": 0, 01:41:50.660 "w_mbytes_per_sec": 0 01:41:50.660 }, 01:41:50.660 "claimed": false, 01:41:50.660 "zoned": false, 01:41:50.660 "supported_io_types": { 01:41:50.660 "read": true, 01:41:50.660 "write": true, 01:41:50.660 "unmap": true, 01:41:50.660 "flush": false, 01:41:50.660 "reset": true, 01:41:50.660 "nvme_admin": false, 01:41:50.660 "nvme_io": false, 01:41:50.660 "nvme_io_md": false, 01:41:50.660 "write_zeroes": true, 01:41:50.660 "zcopy": false, 01:41:50.660 "get_zone_info": false, 01:41:50.660 "zone_management": false, 01:41:50.660 "zone_append": false, 01:41:50.660 "compare": false, 01:41:50.660 "compare_and_write": false, 01:41:50.660 "abort": false, 01:41:50.660 "seek_hole": true, 01:41:50.660 "seek_data": true, 01:41:50.660 "copy": false, 01:41:50.660 "nvme_iov_md": false 01:41:50.660 }, 01:41:50.660 "driver_specific": { 01:41:50.660 "lvol": { 01:41:50.660 "lvol_store_uuid": "b53d6e8f-9da7-4320-8cf7-241463b8e6d6", 01:41:50.660 "base_bdev": "nvme0n1", 01:41:50.660 "thin_provision": true, 01:41:50.660 "num_allocated_clusters": 0, 01:41:50.660 "snapshot": false, 01:41:50.660 "clone": false, 01:41:50.660 "esnap_clone": false 01:41:50.660 } 01:41:50.660 } 01:41:50.660 } 01:41:50.660 ]' 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 01:41:50.660 05:36:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1aef8fd0-7d4c-4e5d-bca1-69b1256c6574 -c nvc0n1p0 --l2p_dram_limit 20 01:41:50.920 [2024-12-09 05:36:42.476050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.476140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:41:50.920 [2024-12-09 05:36:42.476179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:41:50.920 [2024-12-09 05:36:42.476194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.476290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.476325] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:41:50.920 [2024-12-09 05:36:42.476352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 01:41:50.920 [2024-12-09 05:36:42.476365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.476390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:41:50.920 [2024-12-09 05:36:42.477612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:41:50.920 [2024-12-09 05:36:42.477643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.477659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:41:50.920 [2024-12-09 05:36:42.477702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 01:41:50.920 [2024-12-09 05:36:42.477735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.477870] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9cf5d675-1e6d-4399-a3ae-a030961ffb28 01:41:50.920 [2024-12-09 05:36:42.480095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.480146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:41:50.920 [2024-12-09 05:36:42.480185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 01:41:50.920 [2024-12-09 05:36:42.480197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.491536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.491814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:41:50.920 [2024-12-09 05:36:42.491866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.209 ms 01:41:50.920 [2024-12-09 05:36:42.491885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.492012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.492033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:41:50.920 [2024-12-09 05:36:42.492054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 01:41:50.920 [2024-12-09 05:36:42.492067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.492162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.492195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:41:50.920 [2024-12-09 05:36:42.492225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:41:50.920 [2024-12-09 05:36:42.492238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.492275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:41:50.920 [2024-12-09 05:36:42.497837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.497894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:41:50.920 [2024-12-09 05:36:42.497910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.577 ms 01:41:50.920 [2024-12-09 05:36:42.497929] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.497983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.920 [2024-12-09 05:36:42.498001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:41:50.920 [2024-12-09 05:36:42.498014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:41:50.920 [2024-12-09 05:36:42.498028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.920 [2024-12-09 05:36:42.498097] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:41:50.921 [2024-12-09 05:36:42.498259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:41:50.921 [2024-12-09 05:36:42.498276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:41:50.921 [2024-12-09 05:36:42.498293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:41:50.921 [2024-12-09 05:36:42.498307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:41:50.921 [2024-12-09 05:36:42.498335] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:41:50.921 [2024-12-09 05:36:42.498346] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:41:50.921 [2024-12-09 05:36:42.498358] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:41:50.921 [2024-12-09 05:36:42.498369] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:41:50.921 [2024-12-09 05:36:42.498381] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:41:50.921 [2024-12-09 05:36:42.498396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.921 [2024-12-09 05:36:42.498408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:41:50.921 [2024-12-09 05:36:42.498419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 01:41:50.921 [2024-12-09 05:36:42.498432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.921 [2024-12-09 05:36:42.498545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.921 [2024-12-09 05:36:42.498567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:41:50.921 [2024-12-09 05:36:42.498581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 01:41:50.921 [2024-12-09 05:36:42.498598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.921 [2024-12-09 05:36:42.498748] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:41:50.921 [2024-12-09 05:36:42.498817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:41:50.921 [2024-12-09 05:36:42.498845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:41:50.921 [2024-12-09 05:36:42.498865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.498876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:41:50.921 [2024-12-09 05:36:42.498889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.498900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:41:50.921 
[2024-12-09 05:36:42.498913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:41:50.921 [2024-12-09 05:36:42.498924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:41:50.921 [2024-12-09 05:36:42.498937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:41:50.921 [2024-12-09 05:36:42.498947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:41:50.921 [2024-12-09 05:36:42.498974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:41:50.921 [2024-12-09 05:36:42.498984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:41:50.921 [2024-12-09 05:36:42.498997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:41:50.921 [2024-12-09 05:36:42.499008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:41:50.921 [2024-12-09 05:36:42.499023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:41:50.921 [2024-12-09 05:36:42.499083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:41:50.921 [2024-12-09 05:36:42.499118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:41:50.921 [2024-12-09 05:36:42.499199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:41:50.921 [2024-12-09 05:36:42.499248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:41:50.921 [2024-12-09 05:36:42.499283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:41:50.921 [2024-12-09 05:36:42.499319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:41:50.921 [2024-12-09 05:36:42.499358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:41:50.921 [2024-12-09 05:36:42.499372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:41:50.921 [2024-12-09 05:36:42.499382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:41:50.921 [2024-12-09 05:36:42.499396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:41:50.921 [2024-12-09 05:36:42.499407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 01:41:50.921 [2024-12-09 05:36:42.499420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:41:50.921 [2024-12-09 05:36:42.499444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:41:50.921 [2024-12-09 05:36:42.499455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499467] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:41:50.921 [2024-12-09 05:36:42.499479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:41:50.921 [2024-12-09 05:36:42.499493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:41:50.921 [2024-12-09 05:36:42.499524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:41:50.921 [2024-12-09 05:36:42.499545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:41:50.921 [2024-12-09 05:36:42.499560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:41:50.921 [2024-12-09 05:36:42.499572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:41:50.921 [2024-12-09 05:36:42.499586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:41:50.921 [2024-12-09 05:36:42.499597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:41:50.921 [2024-12-09 05:36:42.499615] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:41:50.921 [2024-12-09 05:36:42.499630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:41:50.921 [2024-12-09 05:36:42.499658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:41:50.921 [2024-12-09 05:36:42.499672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:41:50.921 [2024-12-09 05:36:42.499684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:41:50.921 [2024-12-09 05:36:42.499698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:41:50.921 [2024-12-09 05:36:42.499725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:41:50.921 [2024-12-09 05:36:42.499739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:41:50.921 [2024-12-09 05:36:42.499751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:41:50.921 [2024-12-09 05:36:42.499768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:41:50.921 [2024-12-09 05:36:42.499781] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:41:50.921 [2024-12-09 05:36:42.499850] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:41:50.921 [2024-12-09 05:36:42.499876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:41:50.921 [2024-12-09 05:36:42.499913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:41:50.921 [2024-12-09 05:36:42.499928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:41:50.921 [2024-12-09 05:36:42.499941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:41:50.921 [2024-12-09 05:36:42.499957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:50.921 [2024-12-09 05:36:42.499970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:41:50.921 [2024-12-09 05:36:42.499985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 01:41:50.921 [2024-12-09 05:36:42.499996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:50.921 [2024-12-09 05:36:42.500050] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
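Editor's note: the layout dump above is internally consistent and worth a quick check. The base lvol's 26476544 blocks of 4096 B are exactly the 103424.00 MiB base device capacity; 20971520 L2P entries at 4 B each are exactly the 80.00 MiB shown for the l2p region; and at one entry per 4 KiB logical block that table describes 80 GiB of addressable space, of which the --l2p_dram_limit 20 passed to bdev_ftl_create keeps at most 20 MiB resident (the "l2p maximum resident size is: 19 (of 20) MiB" notice further down confirms this). Verifying the two sizes:

    echo $(( 26476544 * 4096 / 1048576 ))   # -> 103424 (MiB), the base device capacity
    echo $(( 20971520 * 4 / 1048576 ))      # -> 80 (MiB), matching "Region l2p ... 80.00 MiB"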
01:41:50.922 [2024-12-09 05:36:42.500067] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:41:54.228 [2024-12-09 05:36:45.266684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.266785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:41:54.228 [2024-12-09 05:36:45.266828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2766.642 ms 01:41:54.228 [2024-12-09 05:36:45.266842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.305322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.305383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:41:54.228 [2024-12-09 05:36:45.305425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.156 ms 01:41:54.228 [2024-12-09 05:36:45.305437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.305654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.305714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:41:54.228 [2024-12-09 05:36:45.305736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 01:41:54.228 [2024-12-09 05:36:45.305749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.357304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.357381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:41:54.228 [2024-12-09 05:36:45.357423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.475 ms 01:41:54.228 [2024-12-09 05:36:45.357436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.357514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.357530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:41:54.228 [2024-12-09 05:36:45.357547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:41:54.228 [2024-12-09 05:36:45.357562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.358327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.358354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:41:54.228 [2024-12-09 05:36:45.358373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 01:41:54.228 [2024-12-09 05:36:45.358386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.358580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.358600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:41:54.228 [2024-12-09 05:36:45.358618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 01:41:54.228 [2024-12-09 05:36:45.358631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.376862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.376936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:41:54.228 [2024-12-09 
05:36:45.376977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.196 ms 01:41:54.228 [2024-12-09 05:36:45.377006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.391861] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 01:41:54.228 [2024-12-09 05:36:45.399744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.399822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:41:54.228 [2024-12-09 05:36:45.399843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.588 ms 01:41:54.228 [2024-12-09 05:36:45.399859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.476894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.476962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:41:54.228 [2024-12-09 05:36:45.476985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.975 ms 01:41:54.228 [2024-12-09 05:36:45.477002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.477254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.477280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:41:54.228 [2024-12-09 05:36:45.477294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 01:41:54.228 [2024-12-09 05:36:45.477313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.508346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.508453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:41:54.228 [2024-12-09 05:36:45.508475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.961 ms 01:41:54.228 [2024-12-09 05:36:45.508491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.535984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.536047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:41:54.228 [2024-12-09 05:36:45.536067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.445 ms 01:41:54.228 [2024-12-09 05:36:45.536081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.536940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.536977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:41:54.228 [2024-12-09 05:36:45.536995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 01:41:54.228 [2024-12-09 05:36:45.537010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.228 [2024-12-09 05:36:45.620451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.228 [2024-12-09 05:36:45.620536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:41:54.229 [2024-12-09 05:36:45.620557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.374 ms 01:41:54.229 [2024-12-09 05:36:45.620572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.229 [2024-12-09 
05:36:45.651802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.229 [2024-12-09 05:36:45.651883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:41:54.229 [2024-12-09 05:36:45.651907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.140 ms 01:41:54.229 [2024-12-09 05:36:45.651922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.229 [2024-12-09 05:36:45.682304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.229 [2024-12-09 05:36:45.682376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:41:54.229 [2024-12-09 05:36:45.682395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.332 ms 01:41:54.229 [2024-12-09 05:36:45.682411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.229 [2024-12-09 05:36:45.712530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.229 [2024-12-09 05:36:45.712596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:41:54.229 [2024-12-09 05:36:45.712615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.040 ms 01:41:54.229 [2024-12-09 05:36:45.712631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.229 [2024-12-09 05:36:45.712714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.229 [2024-12-09 05:36:45.712741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:41:54.229 [2024-12-09 05:36:45.712766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:41:54.229 [2024-12-09 05:36:45.712798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.229 [2024-12-09 05:36:45.712929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:41:54.229 [2024-12-09 05:36:45.712963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:41:54.229 [2024-12-09 05:36:45.712985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 01:41:54.229 [2024-12-09 05:36:45.713000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:41:54.229 [2024-12-09 05:36:45.714351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3237.809 ms, result 0 01:41:54.229 { 01:41:54.229 "name": "ftl0", 01:41:54.229 "uuid": "9cf5d675-1e6d-4399-a3ae-a030961ffb28" 01:41:54.229 } 01:41:54.229 05:36:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 01:41:54.229 05:36:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 01:41:54.229 05:36:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 01:41:54.487 05:36:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 01:41:54.744 [2024-12-09 05:36:46.166558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 01:41:54.744 I/O size of 69632 is greater than zero copy threshold (65536). 01:41:54.744 Zero copy mechanism will not be used. 01:41:54.744 Running I/O for 4 seconds... 
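Editor's note: bdevperf flags the 69632-byte I/O size above because it exceeds the 65536-byte zero-copy threshold, so this pass falls back to regular buffered submission instead of zero copy. The size itself is a whole number of logical blocks:

    echo $(( 69632 / 4096 ))   # -> 17 blocks of 4096 B; 69632 B = 68 KiB > 64 KiB threshold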
01:41:56.612 1581.00 IOPS, 104.99 MiB/s [2024-12-09T05:36:49.603Z] 1618.50 IOPS, 107.48 MiB/s [2024-12-09T05:36:50.536Z] 1637.67 IOPS, 108.75 MiB/s [2024-12-09T05:36:50.536Z] 1646.00 IOPS, 109.30 MiB/s 01:41:58.919 Latency(us) 01:41:58.919 [2024-12-09T05:36:50.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:41:58.919 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 01:41:58.919 ftl0 : 4.00 1645.28 109.26 0.00 0.00 637.09 260.65 2636.33 01:41:58.919 [2024-12-09T05:36:50.536Z] =================================================================================================================== 01:41:58.919 [2024-12-09T05:36:50.536Z] Total : 1645.28 109.26 0.00 0.00 637.09 260.65 2636.33 01:41:58.919 { 01:41:58.919 "results": [ 01:41:58.919 { 01:41:58.919 "job": "ftl0", 01:41:58.919 "core_mask": "0x1", 01:41:58.919 "workload": "randwrite", 01:41:58.919 "status": "finished", 01:41:58.919 "queue_depth": 1, 01:41:58.919 "io_size": 69632, 01:41:58.919 "runtime": 4.002362, 01:41:58.919 "iops": 1645.2784630675585, 01:41:58.919 "mibps": 109.25677293808006, 01:41:58.919 "io_failed": 0, 01:41:58.919 "io_timeout": 0, 01:41:58.919 "avg_latency_us": 637.0941075446951, 01:41:58.919 "min_latency_us": 260.6545454545454, 01:41:58.919 "max_latency_us": 2636.3345454545456 01:41:58.919 } 01:41:58.919 ], 01:41:58.919 "core_count": 1 01:41:58.919 } 01:41:58.920 [2024-12-09 05:36:50.179479] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 01:41:58.920 05:36:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 01:41:58.920 [2024-12-09 05:36:50.326289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 01:41:58.920 Running I/O for 4 seconds... 
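Editor's note: the MiB/s column in these summaries is simply IOPS times I/O size. For the 69632-byte pass above, 1645.28 IOPS works out to the reported 109.26 MiB/s, and the 4096-byte passes that follow obey the same identity:

    awk 'BEGIN { printf "%.2f\n", 1645.28 * 69632 / 1048576 }'   # -> 109.26 MiB/s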
01:42:00.788 7564.00 IOPS, 29.55 MiB/s [2024-12-09T05:36:53.339Z] 7642.50 IOPS, 29.85 MiB/s [2024-12-09T05:36:54.710Z] 7624.00 IOPS, 29.78 MiB/s [2024-12-09T05:36:54.710Z] 7598.00 IOPS, 29.68 MiB/s 01:42:03.093 Latency(us) 01:42:03.093 [2024-12-09T05:36:54.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:42:03.093 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 01:42:03.093 ftl0 : 4.02 7586.14 29.63 0.00 0.00 16825.66 322.09 33125.47 01:42:03.093 [2024-12-09T05:36:54.710Z] =================================================================================================================== 01:42:03.093 [2024-12-09T05:36:54.710Z] Total : 7586.14 29.63 0.00 0.00 16825.66 0.00 33125.47 01:42:03.093 [2024-12-09 05:36:54.359797] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 01:42:03.093 { 01:42:03.093 "results": [ 01:42:03.093 { 01:42:03.093 "job": "ftl0", 01:42:03.093 "core_mask": "0x1", 01:42:03.093 "workload": "randwrite", 01:42:03.093 "status": "finished", 01:42:03.093 "queue_depth": 128, 01:42:03.093 "io_size": 4096, 01:42:03.093 "runtime": 4.022861, 01:42:03.093 "iops": 7586.143294535904, 01:42:03.093 "mibps": 29.633372244280874, 01:42:03.093 "io_failed": 0, 01:42:03.093 "io_timeout": 0, 01:42:03.093 "avg_latency_us": 16825.655600450405, 01:42:03.093 "min_latency_us": 322.0945454545455, 01:42:03.093 "max_latency_us": 33125.46909090909 01:42:03.093 } 01:42:03.093 ], 01:42:03.093 "core_count": 1 01:42:03.093 } 01:42:03.093 05:36:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 01:42:03.093 [2024-12-09 05:36:54.511015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 01:42:03.093 Running I/O for 4 seconds... 
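Editor's note: average latency and IOPS in these runs also hang together via Little's law (queue_depth ~= IOPS x latency). At -q 128 the randwrite pass above implies roughly 128 / 7586.14 ~= 16.9 ms per I/O, in line with the 16825.66 us average reported; at -q 1 the first pass similarly implies ~608 us against its 637.09 us average, the gap plausibly being ramp-up and bookkeeping overhead. A quick check:

    awk 'BEGIN { printf "%.0f us\n", 128 / 7586.14 * 1e6 }'   # -> ~16873 us vs 16825.66 us reported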
01:42:04.987 5764.00 IOPS, 22.52 MiB/s [2024-12-09T05:36:57.540Z] 5650.50 IOPS, 22.07 MiB/s [2024-12-09T05:36:58.916Z] 5599.33 IOPS, 21.87 MiB/s [2024-12-09T05:36:58.916Z] 5547.50 IOPS, 21.67 MiB/s 01:42:07.299 Latency(us) 01:42:07.299 [2024-12-09T05:36:58.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:42:07.299 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:42:07.299 Verification LBA range: start 0x0 length 0x1400000 01:42:07.299 ftl0 : 4.01 5559.72 21.72 0.00 0.00 22942.19 392.84 31218.97 01:42:07.299 [2024-12-09T05:36:58.916Z] =================================================================================================================== 01:42:07.299 [2024-12-09T05:36:58.916Z] Total : 5559.72 21.72 0.00 0.00 22942.19 0.00 31218.97 01:42:07.299 { 01:42:07.299 "results": [ 01:42:07.299 { 01:42:07.299 "job": "ftl0", 01:42:07.299 "core_mask": "0x1", 01:42:07.299 "workload": "verify", 01:42:07.299 "status": "finished", 01:42:07.299 "verify_range": { 01:42:07.299 "start": 0, 01:42:07.299 "length": 20971520 01:42:07.299 }, 01:42:07.299 "queue_depth": 128, 01:42:07.299 "io_size": 4096, 01:42:07.299 "runtime": 4.01387, 01:42:07.299 "iops": 5559.721665126175, 01:42:07.299 "mibps": 21.71766275439912, 01:42:07.299 "io_failed": 0, 01:42:07.299 "io_timeout": 0, 01:42:07.299 "avg_latency_us": 22942.194309178903, 01:42:07.300 "min_latency_us": 392.84363636363634, 01:42:07.300 "max_latency_us": 31218.967272727274 01:42:07.300 } 01:42:07.300 ], 01:42:07.300 "core_count": 1 01:42:07.300 } 01:42:07.300 [2024-12-09 05:36:58.544019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 01:42:07.300 05:36:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 01:42:07.300 [2024-12-09 05:36:58.865340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.300 [2024-12-09 05:36:58.865408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:42:07.300 [2024-12-09 05:36:58.865432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:42:07.300 [2024-12-09 05:36:58.865448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.300 [2024-12-09 05:36:58.865482] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:42:07.300 [2024-12-09 05:36:58.869245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.300 [2024-12-09 05:36:58.869283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:42:07.300 [2024-12-09 05:36:58.869302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 01:42:07.300 [2024-12-09 05:36:58.869315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.300 [2024-12-09 05:36:58.871349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.300 [2024-12-09 05:36:58.871584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:42:07.300 [2024-12-09 05:36:58.871628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 01:42:07.300 [2024-12-09 05:36:58.871642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.559 [2024-12-09 05:36:59.075960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.559 [2024-12-09 05:36:59.076060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 01:42:07.559 [2024-12-09 05:36:59.076108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.240 ms 01:42:07.560 [2024-12-09 05:36:59.076123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.560 [2024-12-09 05:36:59.082983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.560 [2024-12-09 05:36:59.083199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:42:07.560 [2024-12-09 05:36:59.083232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 01:42:07.560 [2024-12-09 05:36:59.083251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.560 [2024-12-09 05:36:59.116554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.560 [2024-12-09 05:36:59.116599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:42:07.560 [2024-12-09 05:36:59.116622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.196 ms 01:42:07.560 [2024-12-09 05:36:59.116634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.560 [2024-12-09 05:36:59.136328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.560 [2024-12-09 05:36:59.136403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:42:07.560 [2024-12-09 05:36:59.136434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 01:42:07.560 [2024-12-09 05:36:59.136447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.560 [2024-12-09 05:36:59.136635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.560 [2024-12-09 05:36:59.136657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:42:07.560 [2024-12-09 05:36:59.136742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 01:42:07.560 [2024-12-09 05:36:59.136757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.560 [2024-12-09 05:36:59.167919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.560 [2024-12-09 05:36:59.167973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:42:07.560 [2024-12-09 05:36:59.168004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.130 ms 01:42:07.560 [2024-12-09 05:36:59.168017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.821 [2024-12-09 05:36:59.199339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.821 [2024-12-09 05:36:59.199553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:42:07.821 [2024-12-09 05:36:59.199589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.271 ms 01:42:07.821 [2024-12-09 05:36:59.199603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.821 [2024-12-09 05:36:59.230863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.821 [2024-12-09 05:36:59.230910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:42:07.821 [2024-12-09 05:36:59.230933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.202 ms 01:42:07.821 [2024-12-09 05:36:59.230946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.821 [2024-12-09 05:36:59.262011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.821 [2024-12-09 05:36:59.262066] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:42:07.821 [2024-12-09 05:36:59.262091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.933 ms 01:42:07.821 [2024-12-09 05:36:59.262104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.821 [2024-12-09 05:36:59.262154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:42:07.821 [2024-12-09 05:36:59.262178 .. 05:36:59.263881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report identical values; the per-band entries are elided) 01:42:07.822 [2024-12-09 05:36:59.263904] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:42:07.822 [2024-12-09 05:36:59.263920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9cf5d675-1e6d-4399-a3ae-a030961ffb28 01:42:07.822 [2024-12-09 05:36:59.263938] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:42:07.822 [2024-12-09 05:36:59.263965] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:42:07.822 [2024-12-09 05:36:59.263978] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:42:07.822 [2024-12-09 05:36:59.263993] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:42:07.822 [2024-12-09 05:36:59.264005] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:42:07.822 [2024-12-09 05:36:59.264030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:42:07.822 [2024-12-09 05:36:59.264043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:42:07.822 [2024-12-09 05:36:59.264059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:42:07.822 [2024-12-09 05:36:59.264071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:42:07.822 [2024-12-09 05:36:59.264087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.822 [2024-12-09 05:36:59.264100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:42:07.822 [2024-12-09 05:36:59.264158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 01:42:07.822 [2024-12-09 05:36:59.264171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.822 [2024-12-09 05:36:59.282127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.822 [2024-12-09 05:36:59.282172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:42:07.823 [2024-12-09 05:36:59.282195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.890 ms 01:42:07.823 [2024-12-09 05:36:59.282208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.823 [2024-12-09 05:36:59.282728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:07.823 [2024-12-09 05:36:59.282828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:42:07.823 [2024-12-09 05:36:59.282856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 01:42:07.823 [2024-12-09 05:36:59.282871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.823 [2024-12-09 05:36:59.332093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:07.823 [2024-12-09 05:36:59.332151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:07.823 [2024-12-09 05:36:59.332176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:07.823 [2024-12-09 05:36:59.332200] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 01:42:07.823 [2024-12-09 05:36:59.332279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:07.823 [2024-12-09 05:36:59.332294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:07.823 [2024-12-09 05:36:59.332320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:07.823 [2024-12-09 05:36:59.332332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.823 [2024-12-09 05:36:59.332508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:07.823 [2024-12-09 05:36:59.332529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:07.823 [2024-12-09 05:36:59.332545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:07.823 [2024-12-09 05:36:59.332558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:07.823 [2024-12-09 05:36:59.332585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:07.823 [2024-12-09 05:36:59.332599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:07.823 [2024-12-09 05:36:59.332614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:07.823 [2024-12-09 05:36:59.332626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.082 [2024-12-09 05:36:59.444361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.082 [2024-12-09 05:36:59.444433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:08.082 [2024-12-09 05:36:59.444461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.082 [2024-12-09 05:36:59.444475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.082 [2024-12-09 05:36:59.533411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.082 [2024-12-09 05:36:59.533479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:08.082 [2024-12-09 05:36:59.533505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.082 [2024-12-09 05:36:59.533518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.082 [2024-12-09 05:36:59.533706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.082 [2024-12-09 05:36:59.533745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:08.082 [2024-12-09 05:36:59.533763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.082 [2024-12-09 05:36:59.533776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.082 [2024-12-09 05:36:59.533853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.082 [2024-12-09 05:36:59.533872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:08.082 [2024-12-09 05:36:59.533889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.082 [2024-12-09 05:36:59.533902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.083 [2024-12-09 05:36:59.534051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.083 [2024-12-09 05:36:59.534074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:08.083 [2024-12-09 05:36:59.534094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 01:42:08.083 [2024-12-09 05:36:59.534107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.083 [2024-12-09 05:36:59.534178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.083 [2024-12-09 05:36:59.534204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:42:08.083 [2024-12-09 05:36:59.534223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.083 [2024-12-09 05:36:59.534235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.083 [2024-12-09 05:36:59.534287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.083 [2024-12-09 05:36:59.534306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:08.083 [2024-12-09 05:36:59.534332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.083 [2024-12-09 05:36:59.534358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.083 [2024-12-09 05:36:59.534426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:08.083 [2024-12-09 05:36:59.534465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:08.083 [2024-12-09 05:36:59.534492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:08.083 [2024-12-09 05:36:59.534505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:08.083 [2024-12-09 05:36:59.534699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 669.290 ms, result 0 01:42:08.083 true 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77921 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77921 ']' 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77921 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77921 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77921' 01:42:08.083 killing process with pid 77921 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77921 01:42:08.083 Received shutdown signal, test time was about 4.000000 seconds 01:42:08.083 01:42:08.083 Latency(us) 01:42:08.083 [2024-12-09T05:36:59.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:42:08.083 [2024-12-09T05:36:59.700Z] =================================================================================================================== 01:42:08.083 [2024-12-09T05:36:59.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:42:08.083 05:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77921 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 01:42:12.309 Remove shared memory files 01:42:12.309 05:37:03 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 01:42:12.309 ************************************ 01:42:12.309 END TEST ftl_bdevperf 01:42:12.309 ************************************ 01:42:12.309 01:42:12.309 real 0m25.960s 01:42:12.309 user 0m29.855s 01:42:12.309 sys 0m1.256s 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:42:12.309 05:37:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 01:42:12.309 05:37:03 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 01:42:12.309 05:37:03 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:42:12.309 05:37:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:42:12.309 05:37:03 ftl -- common/autotest_common.sh@10 -- # set +x 01:42:12.309 ************************************ 01:42:12.309 START TEST ftl_trim 01:42:12.309 ************************************ 01:42:12.309 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 01:42:12.309 * Looking for test storage... 01:42:12.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:42:12.309 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:42:12.309 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 01:42:12.309 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:42:12.310 05:37:03 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:42:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:42:12.310 --rc genhtml_branch_coverage=1 01:42:12.310 --rc genhtml_function_coverage=1 01:42:12.310 --rc genhtml_legend=1 01:42:12.310 --rc geninfo_all_blocks=1 01:42:12.310 --rc geninfo_unexecuted_blocks=1 01:42:12.310 01:42:12.310 ' 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:42:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:42:12.310 --rc genhtml_branch_coverage=1 01:42:12.310 --rc genhtml_function_coverage=1 01:42:12.310 --rc genhtml_legend=1 01:42:12.310 --rc geninfo_all_blocks=1 01:42:12.310 --rc geninfo_unexecuted_blocks=1 01:42:12.310 01:42:12.310 ' 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:42:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:42:12.310 --rc genhtml_branch_coverage=1 01:42:12.310 --rc genhtml_function_coverage=1 01:42:12.310 --rc genhtml_legend=1 01:42:12.310 --rc geninfo_all_blocks=1 01:42:12.310 --rc geninfo_unexecuted_blocks=1 01:42:12.310 01:42:12.310 ' 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:42:12.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:42:12.310 --rc genhtml_branch_coverage=1 01:42:12.310 --rc genhtml_function_coverage=1 01:42:12.310 --rc genhtml_legend=1 01:42:12.310 --rc geninfo_all_blocks=1 01:42:12.310 --rc geninfo_unexecuted_blocks=1 01:42:12.310 01:42:12.310 ' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
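The xtrace above walks through the lcov version gate from scripts/common.sh: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on IFS=.-: and compares them component by component. Condensed from that trace, the logic is roughly the following sketch (simplified; the in-tree helper handles more operators and validates components):
# Sketch of the component-wise version compare traced above.
cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"    # 1.15 -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # 2    -> (2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components count as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<' || $op == '>' ]] && return 1   # equal versions fail strict compares
    return 0
}
cmp_versions 1.15 '<' 2 && echo older    # prints "older", matching the trace's return 0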
01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:42:12.310 05:37:03 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78274 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:42:12.310 05:37:03 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78274 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78274 ']' 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:42:12.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 01:42:12.310 05:37:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:42:12.310 [2024-12-09 05:37:03.571324] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:42:12.310 [2024-12-09 05:37:03.571881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78274 ] 01:42:12.310 [2024-12-09 05:37:03.756547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:42:12.310 [2024-12-09 05:37:03.893449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:42:12.311 [2024-12-09 05:37:03.893534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:42:12.311 [2024-12-09 05:37:03.893539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:42:13.251 05:37:04 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:42:13.251 05:37:04 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 01:42:13.251 05:37:04 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:42:13.251 05:37:04 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 01:42:13.251 05:37:04 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:42:13.251 05:37:04 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 01:42:13.251 05:37:04 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 01:42:13.251 05:37:04 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:42:13.818 05:37:05 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:42:13.818 05:37:05 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 01:42:13.818 05:37:05 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:42:13.818 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:42:13.818 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:42:13.818 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:42:13.818 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:42:13.818 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:42:14.076 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:42:14.076 { 01:42:14.076 "name": "nvme0n1", 01:42:14.076 "aliases": [ 
01:42:14.076 "8d6fc9d7-f4d5-4c96-b23c-871a6c86d7b7" 01:42:14.076 ], 01:42:14.076 "product_name": "NVMe disk", 01:42:14.076 "block_size": 4096, 01:42:14.076 "num_blocks": 1310720, 01:42:14.076 "uuid": "8d6fc9d7-f4d5-4c96-b23c-871a6c86d7b7", 01:42:14.076 "numa_id": -1, 01:42:14.076 "assigned_rate_limits": { 01:42:14.076 "rw_ios_per_sec": 0, 01:42:14.076 "rw_mbytes_per_sec": 0, 01:42:14.076 "r_mbytes_per_sec": 0, 01:42:14.076 "w_mbytes_per_sec": 0 01:42:14.076 }, 01:42:14.076 "claimed": true, 01:42:14.076 "claim_type": "read_many_write_one", 01:42:14.076 "zoned": false, 01:42:14.076 "supported_io_types": { 01:42:14.076 "read": true, 01:42:14.076 "write": true, 01:42:14.076 "unmap": true, 01:42:14.076 "flush": true, 01:42:14.076 "reset": true, 01:42:14.076 "nvme_admin": true, 01:42:14.076 "nvme_io": true, 01:42:14.076 "nvme_io_md": false, 01:42:14.076 "write_zeroes": true, 01:42:14.076 "zcopy": false, 01:42:14.076 "get_zone_info": false, 01:42:14.076 "zone_management": false, 01:42:14.076 "zone_append": false, 01:42:14.076 "compare": true, 01:42:14.076 "compare_and_write": false, 01:42:14.076 "abort": true, 01:42:14.076 "seek_hole": false, 01:42:14.076 "seek_data": false, 01:42:14.076 "copy": true, 01:42:14.076 "nvme_iov_md": false 01:42:14.076 }, 01:42:14.076 "driver_specific": { 01:42:14.076 "nvme": [ 01:42:14.077 { 01:42:14.077 "pci_address": "0000:00:11.0", 01:42:14.077 "trid": { 01:42:14.077 "trtype": "PCIe", 01:42:14.077 "traddr": "0000:00:11.0" 01:42:14.077 }, 01:42:14.077 "ctrlr_data": { 01:42:14.077 "cntlid": 0, 01:42:14.077 "vendor_id": "0x1b36", 01:42:14.077 "model_number": "QEMU NVMe Ctrl", 01:42:14.077 "serial_number": "12341", 01:42:14.077 "firmware_revision": "8.0.0", 01:42:14.077 "subnqn": "nqn.2019-08.org.qemu:12341", 01:42:14.077 "oacs": { 01:42:14.077 "security": 0, 01:42:14.077 "format": 1, 01:42:14.077 "firmware": 0, 01:42:14.077 "ns_manage": 1 01:42:14.077 }, 01:42:14.077 "multi_ctrlr": false, 01:42:14.077 "ana_reporting": false 01:42:14.077 }, 01:42:14.077 "vs": { 01:42:14.077 "nvme_version": "1.4" 01:42:14.077 }, 01:42:14.077 "ns_data": { 01:42:14.077 "id": 1, 01:42:14.077 "can_share": false 01:42:14.077 } 01:42:14.077 } 01:42:14.077 ], 01:42:14.077 "mp_policy": "active_passive" 01:42:14.077 } 01:42:14.077 } 01:42:14.077 ]' 01:42:14.077 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:42:14.077 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:42:14.077 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:42:14.077 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 01:42:14.077 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:42:14.077 05:37:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 01:42:14.077 05:37:05 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 01:42:14.077 05:37:05 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:42:14.077 05:37:05 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 01:42:14.077 05:37:05 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:42:14.077 05:37:05 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:42:14.335 05:37:05 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b53d6e8f-9da7-4320-8cf7-241463b8e6d6 01:42:14.335 05:37:05 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 01:42:14.335 05:37:05 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b53d6e8f-9da7-4320-8cf7-241463b8e6d6 01:42:14.593 05:37:06 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:42:14.850 05:37:06 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=c9760a6f-d727-43f2-aa49-541fb916ec32 01:42:14.850 05:37:06 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c9760a6f-d727-43f2-aa49-541fb916ec32 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 01:42:15.415 05:37:06 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.415 05:37:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.415 05:37:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:42:15.415 05:37:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:42:15.415 05:37:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:42:15.415 05:37:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.415 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:42:15.415 { 01:42:15.415 "name": "0f53a1d4-295b-4ce3-809d-d0c4aaa149ce", 01:42:15.415 "aliases": [ 01:42:15.415 "lvs/nvme0n1p0" 01:42:15.415 ], 01:42:15.415 "product_name": "Logical Volume", 01:42:15.415 "block_size": 4096, 01:42:15.415 "num_blocks": 26476544, 01:42:15.415 "uuid": "0f53a1d4-295b-4ce3-809d-d0c4aaa149ce", 01:42:15.415 "assigned_rate_limits": { 01:42:15.415 "rw_ios_per_sec": 0, 01:42:15.415 "rw_mbytes_per_sec": 0, 01:42:15.415 "r_mbytes_per_sec": 0, 01:42:15.415 "w_mbytes_per_sec": 0 01:42:15.415 }, 01:42:15.415 "claimed": false, 01:42:15.415 "zoned": false, 01:42:15.415 "supported_io_types": { 01:42:15.415 "read": true, 01:42:15.415 "write": true, 01:42:15.415 "unmap": true, 01:42:15.415 "flush": false, 01:42:15.415 "reset": true, 01:42:15.415 "nvme_admin": false, 01:42:15.415 "nvme_io": false, 01:42:15.415 "nvme_io_md": false, 01:42:15.415 "write_zeroes": true, 01:42:15.415 "zcopy": false, 01:42:15.415 "get_zone_info": false, 01:42:15.415 "zone_management": false, 01:42:15.415 "zone_append": false, 01:42:15.415 "compare": false, 01:42:15.415 "compare_and_write": false, 01:42:15.415 "abort": false, 01:42:15.415 "seek_hole": true, 01:42:15.415 "seek_data": true, 01:42:15.415 "copy": false, 01:42:15.415 "nvme_iov_md": false 01:42:15.415 }, 01:42:15.415 "driver_specific": { 01:42:15.415 "lvol": { 01:42:15.415 "lvol_store_uuid": "c9760a6f-d727-43f2-aa49-541fb916ec32", 01:42:15.415 "base_bdev": "nvme0n1", 01:42:15.415 "thin_provision": true, 01:42:15.415 "num_allocated_clusters": 0, 01:42:15.415 "snapshot": false, 01:42:15.415 "clone": false, 01:42:15.415 "esnap_clone": false 01:42:15.415 } 01:42:15.415 } 01:42:15.415 } 01:42:15.415 ]' 01:42:15.415 05:37:07 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:42:15.673 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:42:15.673 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:42:15.673 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 01:42:15.673 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:42:15.673 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 01:42:15.673 05:37:07 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 01:42:15.673 05:37:07 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 01:42:15.673 05:37:07 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:42:15.948 05:37:07 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:42:15.948 05:37:07 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 01:42:15.948 05:37:07 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.948 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:15.948 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:42:15.948 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:42:15.948 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:42:15.948 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:16.206 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' [second bdev_get_bdevs dump for 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce, identical to the one above apart from log timestamps, elided] 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:42:16.466 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:42:16.466
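Every get_bdev_size call traced in this test follows the same recipe: bdev_get_bdevs -b NAME, pull block_size and num_blocks out with jq, and convert to MiB. As a standalone sketch of that recipe (hedged; the trace shows the real helper in common/autotest_common.sh, this just makes the arithmetic explicit):
# Sketch: bdev size in MiB = block_size x num_blocks / 1024 / 1024.
# For the lvol above: 4096 x 26476544 / 1024 / 1024 = 103424 MiB.
get_bdev_size_mib() {
    local bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    echo $(( bs * nb / 1024 / 1024 ))
}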
05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:42:16.466 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 01:42:16.466 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:42:16.466 05:37:07 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 01:42:16.466 05:37:07 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 01:42:16.466 05:37:07 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:42:16.724 05:37:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 01:42:16.724 05:37:08 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 01:42:16.724 05:37:08 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:16.724 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:16.724 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 01:42:16.724 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 01:42:16.724 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 01:42:16.724 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce 01:42:16.984 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' [third bdev_get_bdevs dump for the same lvol, again identical apart from log timestamps, elided] 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:42:16.984 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 01:42:16.984 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:42:16.984 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- #
nb=26476544 01:42:16.984 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:42:16.984 05:37:08 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 01:42:16.984 05:37:08 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 01:42:16.984 05:37:08 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 01:42:17.243 [2024-12-09 05:37:08.722181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.722244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:42:17.243 [2024-12-09 05:37:08.722287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:42:17.243 [2024-12-09 05:37:08.722302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.243 [2024-12-09 05:37:08.726277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.726527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:17.243 [2024-12-09 05:37:08.726565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.928 ms 01:42:17.243 [2024-12-09 05:37:08.726581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.243 [2024-12-09 05:37:08.726890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:42:17.243 [2024-12-09 05:37:08.728030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:42:17.243 [2024-12-09 05:37:08.728127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.728145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:17.243 [2024-12-09 05:37:08.728161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 01:42:17.243 [2024-12-09 05:37:08.728174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.243 [2024-12-09 05:37:08.728341] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5177abd3-cafa-411b-b43c-d71befe750fc 01:42:17.243 [2024-12-09 05:37:08.730513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.730579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:42:17.243 [2024-12-09 05:37:08.730599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 01:42:17.243 [2024-12-09 05:37:08.730615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.243 [2024-12-09 05:37:08.741799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.741866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:17.243 [2024-12-09 05:37:08.741888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.030 ms 01:42:17.243 [2024-12-09 05:37:08.741908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.243 [2024-12-09 05:37:08.742158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.742187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:17.243 [2024-12-09 05:37:08.742203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.114 ms 01:42:17.243 [2024-12-09 05:37:08.742239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.243 [2024-12-09 05:37:08.742319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.243 [2024-12-09 05:37:08.742341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:42:17.243 [2024-12-09 05:37:08.742355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:42:17.243 [2024-12-09 05:37:08.742373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.244 [2024-12-09 05:37:08.742488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:42:17.244 [2024-12-09 05:37:08.747996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.244 [2024-12-09 05:37:08.748055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:17.244 [2024-12-09 05:37:08.748091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.566 ms 01:42:17.244 [2024-12-09 05:37:08.748104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.244 [2024-12-09 05:37:08.748210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.244 [2024-12-09 05:37:08.748258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:42:17.244 [2024-12-09 05:37:08.748275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:42:17.244 [2024-12-09 05:37:08.748288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.244 [2024-12-09 05:37:08.748353] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:42:17.244 [2024-12-09 05:37:08.748501] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:42:17.244 [2024-12-09 05:37:08.748526] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:42:17.244 [2024-12-09 05:37:08.748543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:42:17.244 [2024-12-09 05:37:08.748561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:42:17.244 [2024-12-09 05:37:08.748576] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:42:17.244 [2024-12-09 05:37:08.748592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:42:17.244 [2024-12-09 05:37:08.748604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:42:17.244 [2024-12-09 05:37:08.748621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:42:17.244 [2024-12-09 05:37:08.748633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:42:17.244 [2024-12-09 05:37:08.748649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.244 [2024-12-09 05:37:08.748703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:42:17.244 [2024-12-09 05:37:08.748745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 01:42:17.244 [2024-12-09 05:37:08.748759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.244 [2024-12-09 05:37:08.748877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
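The trace above is the 'FTL startup' management flow kicked off by bdev_ftl_create. For orientation, the bring-up reduces to three RPCs, reconstructed here from the commands echoed earlier in this log (the PCIe address, UUID, and sizes are specific to this run, not a general recipe). The get_bdev_size helper computes a bdev's size in MiB as block_size * num_blocks / 2^20, i.e. 4096 * 26476544 / 1048576 = 103424 MiB for the base lvol:

    # attach the NVMe controller used for the NV cache; its namespace appears as nvc0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # carve one 5171 MiB split off nvc0n1 as the write-buffer cache (nvc0n1p0)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # assemble ftl0 from the thin-provisioned lvol (data) and nvc0n1p0 (NV cache)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 0f53a1d4-295b-4ce3-809d-d0c4aaa149ce -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The numbers are self-consistent: the layout reports 23592960 L2P entries at 4 bytes each, which is exactly the 90.00 MiB l2p region in the NV cache layout dumped below (23592960 * 4 / 2^20 = 90 MiB).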
01:42:17.244 [2024-12-09 05:37:08.748894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:42:17.244 [2024-12-09 05:37:08.748918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 01:42:17.244 [2024-12-09 05:37:08.748931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.244 [2024-12-09 05:37:08.749115] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:42:17.244 [2024-12-09 05:37:08.749184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:42:17.244 [2024-12-09 05:37:08.749210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:42:17.244 [2024-12-09 05:37:08.749254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:42:17.244 [2024-12-09 05:37:08.749297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:17.244 [2024-12-09 05:37:08.749324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:42:17.244 [2024-12-09 05:37:08.749339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:42:17.244 [2024-12-09 05:37:08.749354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:17.244 [2024-12-09 05:37:08.749366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:42:17.244 [2024-12-09 05:37:08.749381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:42:17.244 [2024-12-09 05:37:08.749394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:42:17.244 [2024-12-09 05:37:08.749423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:42:17.244 [2024-12-09 05:37:08.749467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:42:17.244 [2024-12-09 05:37:08.749506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:42:17.244 [2024-12-09 05:37:08.749548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749575] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 01:42:17.244 [2024-12-09 05:37:08.749587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:42:17.244 [2024-12-09 05:37:08.749631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:17.244 [2024-12-09 05:37:08.749674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:42:17.244 [2024-12-09 05:37:08.749687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:42:17.244 [2024-12-09 05:37:08.749703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:17.244 [2024-12-09 05:37:08.749751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:42:17.244 [2024-12-09 05:37:08.749770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:42:17.244 [2024-12-09 05:37:08.749783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:42:17.244 [2024-12-09 05:37:08.749811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:42:17.244 [2024-12-09 05:37:08.749826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:42:17.244 [2024-12-09 05:37:08.749855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:42:17.244 [2024-12-09 05:37:08.749868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:17.244 [2024-12-09 05:37:08.749900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:42:17.244 [2024-12-09 05:37:08.749918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:42:17.244 [2024-12-09 05:37:08.749931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:42:17.244 [2024-12-09 05:37:08.749947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:42:17.244 [2024-12-09 05:37:08.749959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:42:17.244 [2024-12-09 05:37:08.749975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:42:17.244 [2024-12-09 05:37:08.749993] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:42:17.244 [2024-12-09 05:37:08.750013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:17.244 [2024-12-09 05:37:08.750034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:42:17.244 [2024-12-09 05:37:08.750065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:42:17.244 [2024-12-09 05:37:08.750108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 01:42:17.244 [2024-12-09 05:37:08.750123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:42:17.244 [2024-12-09 05:37:08.750136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:42:17.244 [2024-12-09 05:37:08.750150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:42:17.244 [2024-12-09 05:37:08.750163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:42:17.244 [2024-12-09 05:37:08.750178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:42:17.244 [2024-12-09 05:37:08.750191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:42:17.244 [2024-12-09 05:37:08.750208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:42:17.244 [2024-12-09 05:37:08.750220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:42:17.244 [2024-12-09 05:37:08.750238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:42:17.244 [2024-12-09 05:37:08.750250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:42:17.244 [2024-12-09 05:37:08.750277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:42:17.244 [2024-12-09 05:37:08.750290] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:42:17.244 [2024-12-09 05:37:08.750307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:17.245 [2024-12-09 05:37:08.750320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:42:17.245 [2024-12-09 05:37:08.750335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:42:17.245 [2024-12-09 05:37:08.750348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:42:17.245 [2024-12-09 05:37:08.750363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:42:17.245 [2024-12-09 05:37:08.750377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:17.245 [2024-12-09 05:37:08.750392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:42:17.245 [2024-12-09 05:37:08.750405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 01:42:17.245 [2024-12-09 05:37:08.750420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:17.245 [2024-12-09 05:37:08.750590] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 01:42:17.245 [2024-12-09 05:37:08.750617] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:42:20.538 [2024-12-09 05:37:11.677278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.538 [2024-12-09 05:37:11.677391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:42:20.538 [2024-12-09 05:37:11.677415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2926.705 ms 01:42:20.538 [2024-12-09 05:37:11.677432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.538 [2024-12-09 05:37:11.718582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.538 [2024-12-09 05:37:11.718692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:20.538 [2024-12-09 05:37:11.718720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.749 ms 01:42:20.538 [2024-12-09 05:37:11.718737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.538 [2024-12-09 05:37:11.718987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.538 [2024-12-09 05:37:11.719020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:42:20.538 [2024-12-09 05:37:11.719061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 01:42:20.538 [2024-12-09 05:37:11.719087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.538 [2024-12-09 05:37:11.778462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.538 [2024-12-09 05:37:11.778562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:20.538 [2024-12-09 05:37:11.778586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.306 ms 01:42:20.538 [2024-12-09 05:37:11.778605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.538 [2024-12-09 05:37:11.778831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.538 [2024-12-09 05:37:11.778859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:20.539 [2024-12-09 05:37:11.778876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 01:42:20.539 [2024-12-09 05:37:11.778891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.779547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.779795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:20.539 [2024-12-09 05:37:11.779825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 01:42:20.539 [2024-12-09 05:37:11.779842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.780042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.780064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:20.539 [2024-12-09 05:37:11.780100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 01:42:20.539 [2024-12-09 05:37:11.780120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.803626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.803928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 01:42:20.539 [2024-12-09 05:37:11.803961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.461 ms 01:42:20.539 [2024-12-09 05:37:11.803979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.819141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:42:20.539 [2024-12-09 05:37:11.842351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.842424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:42:20.539 [2024-12-09 05:37:11.842491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.193 ms 01:42:20.539 [2024-12-09 05:37:11.842505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.929571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.929651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:42:20.539 [2024-12-09 05:37:11.929741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.896 ms 01:42:20.539 [2024-12-09 05:37:11.929757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.930069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.930092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:42:20.539 [2024-12-09 05:37:11.930113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 01:42:20.539 [2024-12-09 05:37:11.930126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.959530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.959785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:42:20.539 [2024-12-09 05:37:11.959825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.334 ms 01:42:20.539 [2024-12-09 05:37:11.959845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.990072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.990116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:42:20.539 [2024-12-09 05:37:11.990171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.109 ms 01:42:20.539 [2024-12-09 05:37:11.990184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:11.991220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:11.991260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:42:20.539 [2024-12-09 05:37:11.991282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 01:42:20.539 [2024-12-09 05:37:11.991296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:12.082972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:12.083190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:42:20.539 [2024-12-09 05:37:12.083234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.601 ms 01:42:20.539 [2024-12-09 05:37:12.083250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:12.116651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:12.116725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:42:20.539 [2024-12-09 05:37:12.116769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.248 ms 01:42:20.539 [2024-12-09 05:37:12.116799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.539 [2024-12-09 05:37:12.148171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.539 [2024-12-09 05:37:12.148216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:42:20.539 [2024-12-09 05:37:12.148255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.255 ms 01:42:20.539 [2024-12-09 05:37:12.148268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.814 [2024-12-09 05:37:12.179958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.814 [2024-12-09 05:37:12.180019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:42:20.814 [2024-12-09 05:37:12.180067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.571 ms 01:42:20.814 [2024-12-09 05:37:12.180095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.814 [2024-12-09 05:37:12.180210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.814 [2024-12-09 05:37:12.180232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:42:20.814 [2024-12-09 05:37:12.180253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:42:20.814 [2024-12-09 05:37:12.180266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.814 [2024-12-09 05:37:12.180364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:20.814 [2024-12-09 05:37:12.180382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:42:20.814 [2024-12-09 05:37:12.180397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:42:20.814 [2024-12-09 05:37:12.180410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:20.814 [2024-12-09 05:37:12.181707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:42:20.814 [2024-12-09 05:37:12.185821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3459.209 ms, result 0 01:42:20.814 [2024-12-09 05:37:12.186845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:42:20.814 { 01:42:20.814 "name": "ftl0", 01:42:20.814 "uuid": "5177abd3-cafa-411b-b43c-d71befe750fc" 01:42:20.814 } 01:42:20.814 05:37:12 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 01:42:20.814 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 01:42:20.814 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:42:20.814 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 01:42:20.814 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:42:20.815 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:42:20.815 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:42:21.074 05:37:12 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 01:42:21.333 [ 01:42:21.333 { 01:42:21.333 "name": "ftl0", 01:42:21.333 "aliases": [ 01:42:21.333 "5177abd3-cafa-411b-b43c-d71befe750fc" 01:42:21.333 ], 01:42:21.333 "product_name": "FTL disk", 01:42:21.333 "block_size": 4096, 01:42:21.333 "num_blocks": 23592960, 01:42:21.333 "uuid": "5177abd3-cafa-411b-b43c-d71befe750fc", 01:42:21.333 "assigned_rate_limits": { 01:42:21.333 "rw_ios_per_sec": 0, 01:42:21.333 "rw_mbytes_per_sec": 0, 01:42:21.333 "r_mbytes_per_sec": 0, 01:42:21.333 "w_mbytes_per_sec": 0 01:42:21.333 }, 01:42:21.333 "claimed": false, 01:42:21.333 "zoned": false, 01:42:21.333 "supported_io_types": { 01:42:21.333 "read": true, 01:42:21.333 "write": true, 01:42:21.333 "unmap": true, 01:42:21.333 "flush": true, 01:42:21.333 "reset": false, 01:42:21.333 "nvme_admin": false, 01:42:21.333 "nvme_io": false, 01:42:21.333 "nvme_io_md": false, 01:42:21.333 "write_zeroes": true, 01:42:21.333 "zcopy": false, 01:42:21.333 "get_zone_info": false, 01:42:21.333 "zone_management": false, 01:42:21.333 "zone_append": false, 01:42:21.333 "compare": false, 01:42:21.333 "compare_and_write": false, 01:42:21.333 "abort": false, 01:42:21.333 "seek_hole": false, 01:42:21.333 "seek_data": false, 01:42:21.333 "copy": false, 01:42:21.333 "nvme_iov_md": false 01:42:21.333 }, 01:42:21.333 "driver_specific": { 01:42:21.333 "ftl": { 01:42:21.333 "base_bdev": "0f53a1d4-295b-4ce3-809d-d0c4aaa149ce", 01:42:21.333 "cache": "nvc0n1p0" 01:42:21.333 } 01:42:21.333 } 01:42:21.333 } 01:42:21.333 ] 01:42:21.333 05:37:12 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 01:42:21.333 05:37:12 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 01:42:21.333 05:37:12 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:42:21.593 05:37:13 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 01:42:21.593 05:37:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 01:42:21.853 05:37:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 01:42:21.853 { 01:42:21.853 "name": "ftl0", 01:42:21.853 "aliases": [ 01:42:21.853 "5177abd3-cafa-411b-b43c-d71befe750fc" 01:42:21.853 ], 01:42:21.853 "product_name": "FTL disk", 01:42:21.853 "block_size": 4096, 01:42:21.853 "num_blocks": 23592960, 01:42:21.853 "uuid": "5177abd3-cafa-411b-b43c-d71befe750fc", 01:42:21.853 "assigned_rate_limits": { 01:42:21.853 "rw_ios_per_sec": 0, 01:42:21.853 "rw_mbytes_per_sec": 0, 01:42:21.853 "r_mbytes_per_sec": 0, 01:42:21.853 "w_mbytes_per_sec": 0 01:42:21.853 }, 01:42:21.853 "claimed": false, 01:42:21.853 "zoned": false, 01:42:21.853 "supported_io_types": { 01:42:21.853 "read": true, 01:42:21.853 "write": true, 01:42:21.853 "unmap": true, 01:42:21.853 "flush": true, 01:42:21.853 "reset": false, 01:42:21.853 "nvme_admin": false, 01:42:21.853 "nvme_io": false, 01:42:21.853 "nvme_io_md": false, 01:42:21.853 "write_zeroes": true, 01:42:21.853 "zcopy": false, 01:42:21.853 "get_zone_info": false, 01:42:21.853 "zone_management": false, 01:42:21.853 "zone_append": false, 01:42:21.853 "compare": false, 01:42:21.853 "compare_and_write": false, 01:42:21.853 "abort": false, 01:42:21.853 "seek_hole": false, 01:42:21.853 "seek_data": false, 01:42:21.853 "copy": false, 01:42:21.853 "nvme_iov_md": false 01:42:21.853 }, 01:42:21.853 "driver_specific": { 01:42:21.853 "ftl": { 01:42:21.853 "base_bdev": 
"0f53a1d4-295b-4ce3-809d-d0c4aaa149ce", 01:42:21.853 "cache": "nvc0n1p0" 01:42:21.853 } 01:42:21.853 } 01:42:21.853 } 01:42:21.853 ]' 01:42:21.853 05:37:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 01:42:21.853 05:37:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 01:42:21.853 05:37:13 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:42:22.113 [2024-12-09 05:37:13.663328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.113 [2024-12-09 05:37:13.663634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:42:22.113 [2024-12-09 05:37:13.663672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:42:22.113 [2024-12-09 05:37:13.663707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.113 [2024-12-09 05:37:13.663771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:42:22.113 [2024-12-09 05:37:13.667596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.113 [2024-12-09 05:37:13.667631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:42:22.113 [2024-12-09 05:37:13.667670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 01:42:22.113 [2024-12-09 05:37:13.667695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.113 [2024-12-09 05:37:13.668277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.113 [2024-12-09 05:37:13.668337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:42:22.113 [2024-12-09 05:37:13.668356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 01:42:22.113 [2024-12-09 05:37:13.668369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.113 [2024-12-09 05:37:13.671936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.113 [2024-12-09 05:37:13.671969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:42:22.113 [2024-12-09 05:37:13.671988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.525 ms 01:42:22.113 [2024-12-09 05:37:13.672000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.113 [2024-12-09 05:37:13.679248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.113 [2024-12-09 05:37:13.679286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:42:22.113 [2024-12-09 05:37:13.679337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.172 ms 01:42:22.113 [2024-12-09 05:37:13.679349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.113 [2024-12-09 05:37:13.711666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.113 [2024-12-09 05:37:13.711917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:42:22.113 [2024-12-09 05:37:13.711957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.224 ms 01:42:22.113 [2024-12-09 05:37:13.711972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.731022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.373 [2024-12-09 05:37:13.731249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:42:22.373 [2024-12-09 05:37:13.731292] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.938 ms 01:42:22.373 [2024-12-09 05:37:13.731307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.731584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.373 [2024-12-09 05:37:13.731606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:42:22.373 [2024-12-09 05:37:13.731624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 01:42:22.373 [2024-12-09 05:37:13.731637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.763687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.373 [2024-12-09 05:37:13.763931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:42:22.373 [2024-12-09 05:37:13.763969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.007 ms 01:42:22.373 [2024-12-09 05:37:13.763984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.794236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.373 [2024-12-09 05:37:13.794279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:42:22.373 [2024-12-09 05:37:13.794321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.144 ms 01:42:22.373 [2024-12-09 05:37:13.794334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.825426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.373 [2024-12-09 05:37:13.825469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:42:22.373 [2024-12-09 05:37:13.825507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.950 ms 01:42:22.373 [2024-12-09 05:37:13.825520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.856426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.373 [2024-12-09 05:37:13.856469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:42:22.373 [2024-12-09 05:37:13.856506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.723 ms 01:42:22.373 [2024-12-09 05:37:13.856518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.373 [2024-12-09 05:37:13.856616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:42:22.373 [2024-12-09 05:37:13.856643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 
[2024-12-09 05:37:13.856814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:42:22.373 [2024-12-09 05:37:13.856976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.856989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 01:42:22.374 [2024-12-09 05:37:13.857223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.857996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:42:22.374 [2024-12-09 05:37:13.858293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:42:22.374 [2024-12-09 05:37:13.858311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:42:22.374 [2024-12-09 05:37:13.858326] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:42:22.374 [2024-12-09 05:37:13.858341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:42:22.374 [2024-12-09 05:37:13.858357] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:42:22.374 [2024-12-09 05:37:13.858372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:42:22.374 [2024-12-09 05:37:13.858385] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:42:22.374 [2024-12-09 05:37:13.858400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 01:42:22.375 [2024-12-09 05:37:13.858413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:42:22.375 [2024-12-09 05:37:13.858427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:42:22.375 [2024-12-09 05:37:13.858449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:42:22.375 [2024-12-09 05:37:13.858469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.375 [2024-12-09 05:37:13.858482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:42:22.375 [2024-12-09 05:37:13.858499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 01:42:22.375 [2024-12-09 05:37:13.858511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.375 [2024-12-09 05:37:13.875654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.375 [2024-12-09 05:37:13.875732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:42:22.375 [2024-12-09 05:37:13.875774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.100 ms 01:42:22.375 [2024-12-09 05:37:13.875788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.375 [2024-12-09 05:37:13.876337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:22.375 [2024-12-09 05:37:13.876373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:42:22.375 [2024-12-09 05:37:13.876395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 01:42:22.375 [2024-12-09 05:37:13.876409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.375 [2024-12-09 05:37:13.936581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.375 [2024-12-09 05:37:13.936639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:22.375 [2024-12-09 05:37:13.936697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.375 [2024-12-09 05:37:13.936748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.375 [2024-12-09 05:37:13.936917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.375 [2024-12-09 05:37:13.936938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:22.375 [2024-12-09 05:37:13.936956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.375 [2024-12-09 05:37:13.936969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.375 [2024-12-09 05:37:13.937064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.375 [2024-12-09 05:37:13.937089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:22.375 [2024-12-09 05:37:13.937109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.375 [2024-12-09 05:37:13.937123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.375 [2024-12-09 05:37:13.937171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.375 [2024-12-09 05:37:13.937186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:22.375 [2024-12-09 05:37:13.937202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.375 [2024-12-09 05:37:13.937215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.634 [2024-12-09 
05:37:14.051685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.634 [2024-12-09 05:37:14.051773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:22.634 [2024-12-09 05:37:14.051799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.634 [2024-12-09 05:37:14.051813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.634 [2024-12-09 05:37:14.140296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.634 [2024-12-09 05:37:14.140383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:22.634 [2024-12-09 05:37:14.140426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.634 [2024-12-09 05:37:14.140442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.634 [2024-12-09 05:37:14.140610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.634 [2024-12-09 05:37:14.140631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:22.634 [2024-12-09 05:37:14.140656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.635 [2024-12-09 05:37:14.140669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.635 [2024-12-09 05:37:14.140789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.635 [2024-12-09 05:37:14.140807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:22.635 [2024-12-09 05:37:14.140824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.635 [2024-12-09 05:37:14.140836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.635 [2024-12-09 05:37:14.141008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.635 [2024-12-09 05:37:14.141030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:22.635 [2024-12-09 05:37:14.141048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.635 [2024-12-09 05:37:14.141064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.635 [2024-12-09 05:37:14.141153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.635 [2024-12-09 05:37:14.141179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:42:22.635 [2024-12-09 05:37:14.141197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.635 [2024-12-09 05:37:14.141211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.635 [2024-12-09 05:37:14.141282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.635 [2024-12-09 05:37:14.141304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:22.635 [2024-12-09 05:37:14.141324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.635 [2024-12-09 05:37:14.141340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.635 [2024-12-09 05:37:14.141416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:22.635 [2024-12-09 05:37:14.141434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:22.635 [2024-12-09 05:37:14.141451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:22.635 [2024-12-09 05:37:14.141464] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:22.635 [2024-12-09 05:37:14.141727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 478.367 ms, result 0 01:42:22.635 true 01:42:22.635 05:37:14 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78274 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78274 ']' 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78274 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78274 01:42:22.635 killing process with pid 78274 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78274' 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78274 01:42:22.635 05:37:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78274 01:42:27.900 05:37:19 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 01:42:28.882 65536+0 records in 01:42:28.882 65536+0 records out 01:42:28.882 268435456 bytes (268 MB, 256 MiB) copied, 1.21623 s, 221 MB/s 01:42:28.882 05:37:20 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:42:28.882 [2024-12-09 05:37:20.398101] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:42:28.882 [2024-12-09 05:37:20.398380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78483 ] 01:42:29.141 [2024-12-09 05:37:20.587145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:42:29.141 [2024-12-09 05:37:20.750651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:42:29.709 [2024-12-09 05:37:21.133806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:42:29.709 [2024-12-09 05:37:21.133893] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:42:29.709 [2024-12-09 05:37:21.302286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.709 [2024-12-09 05:37:21.302338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:42:29.709 [2024-12-09 05:37:21.302373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:42:29.710 [2024-12-09 05:37:21.302394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.710 [2024-12-09 05:37:21.306361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.710 [2024-12-09 05:37:21.306403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:29.710 [2024-12-09 05:37:21.306462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms 01:42:29.710 [2024-12-09 05:37:21.306476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.710 [2024-12-09 05:37:21.306608] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:42:29.710 [2024-12-09 05:37:21.307699] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:42:29.710 [2024-12-09 05:37:21.307739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.710 [2024-12-09 05:37:21.307754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:29.710 [2024-12-09 05:37:21.307767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 01:42:29.710 [2024-12-09 05:37:21.307778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.710 [2024-12-09 05:37:21.309985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:42:29.970 [2024-12-09 05:37:21.327089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.327129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:42:29.970 [2024-12-09 05:37:21.327162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.105 ms 01:42:29.970 [2024-12-09 05:37:21.327174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.327280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.327300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:42:29.970 [2024-12-09 05:37:21.327313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 01:42:29.970 [2024-12-09 05:37:21.327324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.337096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:42:29.970 [2024-12-09 05:37:21.337144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:29.970 [2024-12-09 05:37:21.337160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.720 ms 01:42:29.970 [2024-12-09 05:37:21.337172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.337314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.337335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:29.970 [2024-12-09 05:37:21.337349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:42:29.970 [2024-12-09 05:37:21.337360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.337423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.337441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:42:29.970 [2024-12-09 05:37:21.337454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:42:29.970 [2024-12-09 05:37:21.337480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.337544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:42:29.970 [2024-12-09 05:37:21.343337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.343513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:29.970 [2024-12-09 05:37:21.343685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.802 ms 01:42:29.970 [2024-12-09 05:37:21.343849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.343960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.344161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:42:29.970 [2024-12-09 05:37:21.344213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:42:29.970 [2024-12-09 05:37:21.344230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.344277] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:42:29.970 [2024-12-09 05:37:21.344306] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:42:29.970 [2024-12-09 05:37:21.344348] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:42:29.970 [2024-12-09 05:37:21.344367] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:42:29.970 [2024-12-09 05:37:21.344489] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:42:29.970 [2024-12-09 05:37:21.344503] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:42:29.970 [2024-12-09 05:37:21.344518] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:42:29.970 [2024-12-09 05:37:21.344538] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:42:29.970 [2024-12-09 05:37:21.344551] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:42:29.970 [2024-12-09 05:37:21.344563] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:42:29.970 [2024-12-09 05:37:21.344574] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:42:29.970 [2024-12-09 05:37:21.344584] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:42:29.970 [2024-12-09 05:37:21.344595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:42:29.970 [2024-12-09 05:37:21.344607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.344618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:42:29.970 [2024-12-09 05:37:21.344630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 01:42:29.970 [2024-12-09 05:37:21.344640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.344786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.970 [2024-12-09 05:37:21.344810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:42:29.970 [2024-12-09 05:37:21.344823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:42:29.970 [2024-12-09 05:37:21.344834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.970 [2024-12-09 05:37:21.344954] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:42:29.970 [2024-12-09 05:37:21.344972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:42:29.970 [2024-12-09 05:37:21.344987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:29.970 [2024-12-09 05:37:21.344998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:29.970 [2024-12-09 05:37:21.345010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:42:29.970 [2024-12-09 05:37:21.345019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:42:29.970 [2024-12-09 05:37:21.345029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:42:29.970 [2024-12-09 05:37:21.345039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:42:29.970 [2024-12-09 05:37:21.345049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:42:29.970 [2024-12-09 05:37:21.345059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:29.970 [2024-12-09 05:37:21.345079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:42:29.970 [2024-12-09 05:37:21.345103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:42:29.970 [2024-12-09 05:37:21.345114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:29.971 [2024-12-09 05:37:21.345124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:42:29.971 [2024-12-09 05:37:21.345135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:42:29.971 [2024-12-09 05:37:21.345145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:42:29.971 [2024-12-09 05:37:21.345166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345176] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:42:29.971 [2024-12-09 05:37:21.345197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:42:29.971 [2024-12-09 05:37:21.345228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:42:29.971 [2024-12-09 05:37:21.345258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:42:29.971 [2024-12-09 05:37:21.345288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:42:29.971 [2024-12-09 05:37:21.345319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:29.971 [2024-12-09 05:37:21.345347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:42:29.971 [2024-12-09 05:37:21.345357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:42:29.971 [2024-12-09 05:37:21.345367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:29.971 [2024-12-09 05:37:21.345377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:42:29.971 [2024-12-09 05:37:21.345387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:42:29.971 [2024-12-09 05:37:21.345397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:42:29.971 [2024-12-09 05:37:21.345418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:42:29.971 [2024-12-09 05:37:21.345427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345437] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:42:29.971 [2024-12-09 05:37:21.345448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:42:29.971 [2024-12-09 05:37:21.345464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:29.971 [2024-12-09 05:37:21.345487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:42:29.971 [2024-12-09 05:37:21.345498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:42:29.971 [2024-12-09 05:37:21.345509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:42:29.971 
[2024-12-09 05:37:21.345519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:42:29.971 [2024-12-09 05:37:21.345529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:42:29.971 [2024-12-09 05:37:21.345540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:42:29.971 [2024-12-09 05:37:21.345553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:42:29.971 [2024-12-09 05:37:21.345567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:42:29.971 [2024-12-09 05:37:21.345592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:42:29.971 [2024-12-09 05:37:21.345603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:42:29.971 [2024-12-09 05:37:21.345614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:42:29.971 [2024-12-09 05:37:21.345625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:42:29.971 [2024-12-09 05:37:21.345636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:42:29.971 [2024-12-09 05:37:21.345646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:42:29.971 [2024-12-09 05:37:21.345657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:42:29.971 [2024-12-09 05:37:21.345685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:42:29.971 [2024-12-09 05:37:21.345697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:42:29.971 [2024-12-09 05:37:21.345758] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:42:29.971 [2024-12-09 05:37:21.345771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:42:29.971 [2024-12-09 05:37:21.345794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:42:29.971 [2024-12-09 05:37:21.345805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:42:29.971 [2024-12-09 05:37:21.345816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:42:29.971 [2024-12-09 05:37:21.345829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.345847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:42:29.971 [2024-12-09 05:37:21.345859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 01:42:29.971 [2024-12-09 05:37:21.345870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.389172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.389241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:29.971 [2024-12-09 05:37:21.389280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.219 ms 01:42:29.971 [2024-12-09 05:37:21.389292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.389509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.389529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:42:29.971 [2024-12-09 05:37:21.389543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:42:29.971 [2024-12-09 05:37:21.389555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.447659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.447755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:29.971 [2024-12-09 05:37:21.447783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.070 ms 01:42:29.971 [2024-12-09 05:37:21.447796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.447968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.447990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:29.971 [2024-12-09 05:37:21.448004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:42:29.971 [2024-12-09 05:37:21.448016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.448613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.448638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:29.971 [2024-12-09 05:37:21.448680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 01:42:29.971 [2024-12-09 05:37:21.448696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.448875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.448936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:29.971 [2024-12-09 05:37:21.448949] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 01:42:29.971 [2024-12-09 05:37:21.448960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.971 [2024-12-09 05:37:21.471237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.971 [2024-12-09 05:37:21.471476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:29.971 [2024-12-09 05:37:21.471506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.247 ms 01:42:29.971 [2024-12-09 05:37:21.471519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.972 [2024-12-09 05:37:21.489466] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 01:42:29.972 [2024-12-09 05:37:21.489633] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:42:29.972 [2024-12-09 05:37:21.489680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.972 [2024-12-09 05:37:21.489707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:42:29.972 [2024-12-09 05:37:21.489722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 01:42:29.972 [2024-12-09 05:37:21.489734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.972 [2024-12-09 05:37:21.520759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.972 [2024-12-09 05:37:21.520807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:42:29.972 [2024-12-09 05:37:21.520826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.910 ms 01:42:29.972 [2024-12-09 05:37:21.520838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.972 [2024-12-09 05:37:21.536344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.972 [2024-12-09 05:37:21.536416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:42:29.972 [2024-12-09 05:37:21.536448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.421 ms 01:42:29.972 [2024-12-09 05:37:21.536459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.972 [2024-12-09 05:37:21.552159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.972 [2024-12-09 05:37:21.552215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:42:29.972 [2024-12-09 05:37:21.552247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.611 ms 01:42:29.972 [2024-12-09 05:37:21.552257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:29.972 [2024-12-09 05:37:21.553265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:29.972 [2024-12-09 05:37:21.553301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:42:29.972 [2024-12-09 05:37:21.553317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 01:42:29.972 [2024-12-09 05:37:21.553329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.638574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.638645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:42:30.231 [2024-12-09 05:37:21.638683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.210 ms 01:42:30.231 [2024-12-09 05:37:21.638699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.652034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:42:30.231 [2024-12-09 05:37:21.675030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.675338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:42:30.231 [2024-12-09 05:37:21.675385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.150 ms 01:42:30.231 [2024-12-09 05:37:21.675398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.675618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.675638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:42:30.231 [2024-12-09 05:37:21.675669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:42:30.231 [2024-12-09 05:37:21.675711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.675794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.675812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:42:30.231 [2024-12-09 05:37:21.675826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 01:42:30.231 [2024-12-09 05:37:21.675837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.675900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.675927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:42:30.231 [2024-12-09 05:37:21.675940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:42:30.231 [2024-12-09 05:37:21.675951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.676007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:42:30.231 [2024-12-09 05:37:21.676025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.676038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:42:30.231 [2024-12-09 05:37:21.676050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:42:30.231 [2024-12-09 05:37:21.676061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.708256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.708316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:42:30.231 [2024-12-09 05:37:21.708337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.166 ms 01:42:30.231 [2024-12-09 05:37:21.708348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:30.231 [2024-12-09 05:37:21.708484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:30.231 [2024-12-09 05:37:21.708504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:42:30.231 [2024-12-09 05:37:21.708517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 01:42:30.231 [2024-12-09 05:37:21.708527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
01:42:30.231 [2024-12-09 05:37:21.709767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:42:30.231 [2024-12-09 05:37:21.714119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.068 ms, result 0 01:42:30.231 [2024-12-09 05:37:21.715040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:42:30.232 [2024-12-09 05:37:21.731391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:42:31.167  [2024-12-09T05:37:24.159Z] Copying: 21/256 [MB] (21 MBps) [2024-12-09T05:37:25.093Z] Copying: 42/256 [MB] (21 MBps) [2024-12-09T05:37:26.025Z] Copying: 64/256 [MB] (21 MBps) [2024-12-09T05:37:26.960Z] Copying: 86/256 [MB] (21 MBps) [2024-12-09T05:37:27.907Z] Copying: 107/256 [MB] (21 MBps) [2024-12-09T05:37:28.841Z] Copying: 129/256 [MB] (22 MBps) [2024-12-09T05:37:29.773Z] Copying: 151/256 [MB] (22 MBps) [2024-12-09T05:37:31.153Z] Copying: 174/256 [MB] (22 MBps) [2024-12-09T05:37:32.088Z] Copying: 194/256 [MB] (20 MBps) [2024-12-09T05:37:33.023Z] Copying: 216/256 [MB] (21 MBps) [2024-12-09T05:37:33.959Z] Copying: 236/256 [MB] (20 MBps) [2024-12-09T05:37:33.959Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-09 05:37:33.661805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:42:42.342 [2024-12-09 05:37:33.675768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.342 [2024-12-09 05:37:33.675812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:42:42.342 [2024-12-09 05:37:33.675849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:42:42.342 [2024-12-09 05:37:33.675869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.342 [2024-12-09 05:37:33.675899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:42:42.342 [2024-12-09 05:37:33.679954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.342 [2024-12-09 05:37:33.680016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:42:42.342 [2024-12-09 05:37:33.680030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 01:42:42.342 [2024-12-09 05:37:33.680076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.342 [2024-12-09 05:37:33.682191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.342 [2024-12-09 05:37:33.682231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:42:42.342 [2024-12-09 05:37:33.682263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 01:42:42.342 [2024-12-09 05:37:33.682273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.342 [2024-12-09 05:37:33.690126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.342 [2024-12-09 05:37:33.690223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:42:42.342 [2024-12-09 05:37:33.690254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.830 ms 01:42:42.342 [2024-12-09 05:37:33.690265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.342 [2024-12-09 05:37:33.697763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.342 
[2024-12-09 05:37:33.697820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:42:42.342 [2024-12-09 05:37:33.697901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.456 ms 01:42:42.342 [2024-12-09 05:37:33.697928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.728896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.728950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:42:42.343 [2024-12-09 05:37:33.728982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.882 ms 01:42:42.343 [2024-12-09 05:37:33.728992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.747545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.747783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:42:42.343 [2024-12-09 05:37:33.747820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.493 ms 01:42:42.343 [2024-12-09 05:37:33.747833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.747977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.747995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:42:42.343 [2024-12-09 05:37:33.748009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 01:42:42.343 [2024-12-09 05:37:33.748035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.781110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.781153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:42:42.343 [2024-12-09 05:37:33.781186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.041 ms 01:42:42.343 [2024-12-09 05:37:33.781197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.813681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.813752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:42:42.343 [2024-12-09 05:37:33.813785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.346 ms 01:42:42.343 [2024-12-09 05:37:33.813796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.845078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.845119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:42:42.343 [2024-12-09 05:37:33.845151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.204 ms 01:42:42.343 [2024-12-09 05:37:33.845161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.876616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.343 [2024-12-09 05:37:33.876696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:42:42.343 [2024-12-09 05:37:33.876746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.353 ms 01:42:42.343 [2024-12-09 05:37:33.876757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.343 [2024-12-09 05:37:33.876823] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:42:42.343 [2024-12-09 05:37:33.876847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.876990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877135] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 
05:37:33.877468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:42:42.343 [2024-12-09 05:37:33.877650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
01:42:42.344 [2024-12-09 05:37:33.877768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.877991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:42:42.344 [2024-12-09 05:37:33.878128] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:42:42.344 [2024-12-09 05:37:33.878147] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:42:42.344 [2024-12-09 05:37:33.878159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:42:42.344 [2024-12-09 05:37:33.878170] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:42:42.344 [2024-12-09 05:37:33.878180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:42:42.344 [2024-12-09 05:37:33.878192] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:42:42.344 [2024-12-09 05:37:33.878203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:42:42.344 [2024-12-09 05:37:33.878214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:42:42.344 [2024-12-09 05:37:33.878225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:42:42.344 [2024-12-09 05:37:33.878234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:42:42.344 [2024-12-09 05:37:33.878245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:42:42.344 [2024-12-09 05:37:33.878257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.344 [2024-12-09 05:37:33.878281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:42:42.344 [2024-12-09 05:37:33.878294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 01:42:42.344 [2024-12-09 05:37:33.878305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.344 [2024-12-09 05:37:33.896127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.344 [2024-12-09 05:37:33.896164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:42:42.344 [2024-12-09 05:37:33.896196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.788 ms 01:42:42.344 [2024-12-09 05:37:33.896207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.344 [2024-12-09 05:37:33.896807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:42.344 [2024-12-09 05:37:33.896839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:42:42.344 [2024-12-09 05:37:33.896852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 01:42:42.344 [2024-12-09 05:37:33.896877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.344 [2024-12-09 05:37:33.948996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.344 [2024-12-09 05:37:33.949056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:42.344 [2024-12-09 05:37:33.949082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.344 [2024-12-09 05:37:33.949094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.344 [2024-12-09 05:37:33.949221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.344 [2024-12-09 05:37:33.949238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:42.344 [2024-12-09 05:37:33.949252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 01:42:42.344 [2024-12-09 05:37:33.949263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.344 [2024-12-09 05:37:33.949332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.344 [2024-12-09 05:37:33.949351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:42.344 [2024-12-09 05:37:33.949363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.344 [2024-12-09 05:37:33.949374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.344 [2024-12-09 05:37:33.949401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.344 [2024-12-09 05:37:33.949422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:42.344 [2024-12-09 05:37:33.949434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.344 [2024-12-09 05:37:33.949460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.602 [2024-12-09 05:37:34.067784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.602 [2024-12-09 05:37:34.068042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:42.602 [2024-12-09 05:37:34.068073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.068087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.163853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.163928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:42.603 [2024-12-09 05:37:34.163949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.163961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.164064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:42.603 [2024-12-09 05:37:34.164077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.164089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.164153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:42.603 [2024-12-09 05:37:34.164189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.164200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.164343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:42.603 [2024-12-09 05:37:34.164356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.164367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.164436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:42:42.603 
[2024-12-09 05:37:34.164449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.164467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.164567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:42.603 [2024-12-09 05:37:34.164579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.164590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:42.603 [2024-12-09 05:37:34.164661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:42.603 [2024-12-09 05:37:34.164679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:42.603 [2024-12-09 05:37:34.164690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:42.603 [2024-12-09 05:37:34.164951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.129 ms, result 0 01:42:44.008 01:42:44.008 01:42:44.008 05:37:35 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78636 01:42:44.008 05:37:35 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 01:42:44.008 05:37:35 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78636 01:42:44.008 05:37:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78636 ']' 01:42:44.008 05:37:35 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:42:44.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:42:44.008 05:37:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 01:42:44.008 05:37:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:42:44.008 05:37:35 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 01:42:44.008 05:37:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:42:44.008 [2024-12-09 05:37:35.612063] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
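Here trim.sh has launched a fresh spdk_tgt for the trim test (svcpid=78636 above) and is blocked in waitforlisten until the target's RPC server accepts connections on /var/tmp/spdk.sock. A minimal bash sketch of an equivalent wait loop, for orientation only: the real helper is waitforlisten in test/common/autotest_common.sh (its body is hidden above by xtrace_disable), wait_for_rpc_socket is a hypothetical name, and the relative rpc.py/spdk_tgt paths assume an SPDK checkout like the one used in this run.

#!/usr/bin/env bash
# Hypothetical stand-in for the waitforlisten step traced above.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( retries-- > 0 )); do
        # Give up early if the target died before its RPC server came up.
        kill -0 "$pid" 2>/dev/null || return 1
        # spdk_get_version succeeds only once the RPC server is accepting connections.
        scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}

build/bin/spdk_tgt -L ftl_init &
wait_for_rpc_socket $!

The EAL parameters line that follows is printed by the target itself as DPDK initialization begins.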
01:42:44.008 [2024-12-09 05:37:35.612458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78636 ] 01:42:44.266 [2024-12-09 05:37:35.802126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:42:44.523 [2024-12-09 05:37:35.950068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:42:45.458 05:37:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:42:45.458 05:37:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 01:42:45.458 05:37:36 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 01:42:45.716 [2024-12-09 05:37:37.237280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:42:45.716 [2024-12-09 05:37:37.237636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:42:45.976 [2024-12-09 05:37:37.431391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.976 [2024-12-09 05:37:37.431734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:42:45.976 [2024-12-09 05:37:37.431892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:42:45.976 [2024-12-09 05:37:37.431918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.436517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.436555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:45.977 [2024-12-09 05:37:37.436621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.527 ms 01:42:45.977 [2024-12-09 05:37:37.436649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.436817] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:42:45.977 [2024-12-09 05:37:37.437848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:42:45.977 [2024-12-09 05:37:37.437888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.437902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:45.977 [2024-12-09 05:37:37.437915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 01:42:45.977 [2024-12-09 05:37:37.437928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.440283] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:42:45.977 [2024-12-09 05:37:37.458721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.458775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:42:45.977 [2024-12-09 05:37:37.458795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 01:42:45.977 [2024-12-09 05:37:37.458814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.458965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.459021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:42:45.977 [2024-12-09 05:37:37.459049] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 01:42:45.977 [2024-12-09 05:37:37.459065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.469074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.469290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:45.977 [2024-12-09 05:37:37.469319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.912 ms 01:42:45.977 [2024-12-09 05:37:37.469369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.469571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.469599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:45.977 [2024-12-09 05:37:37.469612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 01:42:45.977 [2024-12-09 05:37:37.469637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.469755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.469781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:42:45.977 [2024-12-09 05:37:37.469796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:42:45.977 [2024-12-09 05:37:37.469813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.469851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:42:45.977 [2024-12-09 05:37:37.475251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.475289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:45.977 [2024-12-09 05:37:37.475313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.404 ms 01:42:45.977 [2024-12-09 05:37:37.475326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.475406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.475425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:42:45.977 [2024-12-09 05:37:37.475451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:42:45.977 [2024-12-09 05:37:37.475464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.475502] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:42:45.977 [2024-12-09 05:37:37.475532] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:42:45.977 [2024-12-09 05:37:37.475618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:42:45.977 [2024-12-09 05:37:37.475641] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:42:45.977 [2024-12-09 05:37:37.475847] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:42:45.977 [2024-12-09 05:37:37.475869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:42:45.977 [2024-12-09 05:37:37.475900] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:42:45.977 [2024-12-09 05:37:37.475917] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:42:45.977 [2024-12-09 05:37:37.475936] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:42:45.977 [2024-12-09 05:37:37.475951] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:42:45.977 [2024-12-09 05:37:37.475968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:42:45.977 [2024-12-09 05:37:37.475980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:42:45.977 [2024-12-09 05:37:37.476001] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:42:45.977 [2024-12-09 05:37:37.476015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.476032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:42:45.977 [2024-12-09 05:37:37.476045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 01:42:45.977 [2024-12-09 05:37:37.476062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.476162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.977 [2024-12-09 05:37:37.476180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:42:45.977 [2024-12-09 05:37:37.476193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:42:45.977 [2024-12-09 05:37:37.476206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.977 [2024-12-09 05:37:37.476360] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:42:45.977 [2024-12-09 05:37:37.476377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:42:45.977 [2024-12-09 05:37:37.476403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:45.977 [2024-12-09 05:37:37.476431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:45.977 [2024-12-09 05:37:37.476442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:42:45.977 [2024-12-09 05:37:37.476453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:42:45.977 [2024-12-09 05:37:37.476463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:42:45.977 [2024-12-09 05:37:37.476480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:42:45.977 [2024-12-09 05:37:37.476490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:42:45.977 [2024-12-09 05:37:37.476502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:45.977 [2024-12-09 05:37:37.476512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:42:45.977 [2024-12-09 05:37:37.476523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:42:45.977 [2024-12-09 05:37:37.476533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:45.977 [2024-12-09 05:37:37.476544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:42:45.977 [2024-12-09 05:37:37.476554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:42:45.977 [2024-12-09 05:37:37.476565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:45.977 
[2024-12-09 05:37:37.476575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:42:45.977 [2024-12-09 05:37:37.476586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:42:45.977 [2024-12-09 05:37:37.476606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:45.977 [2024-12-09 05:37:37.476621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:42:45.977 [2024-12-09 05:37:37.476631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:42:45.977 [2024-12-09 05:37:37.476648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:45.977 [2024-12-09 05:37:37.476659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:42:45.977 [2024-12-09 05:37:37.476991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:42:45.977 [2024-12-09 05:37:37.477040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:45.977 [2024-12-09 05:37:37.477087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:42:45.977 [2024-12-09 05:37:37.477221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:42:45.977 [2024-12-09 05:37:37.477310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:45.977 [2024-12-09 05:37:37.477367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:42:45.977 [2024-12-09 05:37:37.477408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:42:45.977 [2024-12-09 05:37:37.477510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:45.977 [2024-12-09 05:37:37.477565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:42:45.977 [2024-12-09 05:37:37.477606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:42:45.977 [2024-12-09 05:37:37.477651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:45.977 [2024-12-09 05:37:37.477832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:42:45.977 [2024-12-09 05:37:37.477894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:42:45.977 [2024-12-09 05:37:37.477941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:45.977 [2024-12-09 05:37:37.477988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:42:45.977 [2024-12-09 05:37:37.478127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:42:45.977 [2024-12-09 05:37:37.478208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:45.977 [2024-12-09 05:37:37.478253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:42:45.977 [2024-12-09 05:37:37.478299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:42:45.978 [2024-12-09 05:37:37.478411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:45.978 [2024-12-09 05:37:37.478487] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:42:45.978 [2024-12-09 05:37:37.478546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:42:45.978 [2024-12-09 05:37:37.478604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:45.978 [2024-12-09 05:37:37.478648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:45.978 [2024-12-09 05:37:37.478778] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 01:42:45.978 [2024-12-09 05:37:37.478832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:42:45.978 [2024-12-09 05:37:37.478881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:42:45.978 [2024-12-09 05:37:37.478924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:42:45.978 [2024-12-09 05:37:37.479014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:42:45.978 [2024-12-09 05:37:37.479152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:42:45.978 [2024-12-09 05:37:37.479179] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:42:45.978 [2024-12-09 05:37:37.479195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:42:45.978 [2024-12-09 05:37:37.479228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:42:45.978 [2024-12-09 05:37:37.479245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:42:45.978 [2024-12-09 05:37:37.479256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:42:45.978 [2024-12-09 05:37:37.479271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:42:45.978 [2024-12-09 05:37:37.479282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:42:45.978 [2024-12-09 05:37:37.479297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:42:45.978 [2024-12-09 05:37:37.479340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:42:45.978 [2024-12-09 05:37:37.479371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:42:45.978 [2024-12-09 05:37:37.479384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:42:45.978 [2024-12-09 05:37:37.479463] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:42:45.978 [2024-12-09 
05:37:37.479488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:42:45.978 [2024-12-09 05:37:37.479523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:42:45.978 [2024-12-09 05:37:37.479550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:42:45.978 [2024-12-09 05:37:37.479563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:42:45.978 [2024-12-09 05:37:37.479579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.479591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:42:45.978 [2024-12-09 05:37:37.479606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 01:42:45.978 [2024-12-09 05:37:37.479620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.978 [2024-12-09 05:37:37.525498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.525553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:45.978 [2024-12-09 05:37:37.525612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.719 ms 01:42:45.978 [2024-12-09 05:37:37.525632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.978 [2024-12-09 05:37:37.525946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.525969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:42:45.978 [2024-12-09 05:37:37.525989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 01:42:45.978 [2024-12-09 05:37:37.526002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.978 [2024-12-09 05:37:37.577408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.577495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:45.978 [2024-12-09 05:37:37.577532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.353 ms 01:42:45.978 [2024-12-09 05:37:37.577558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.978 [2024-12-09 05:37:37.577745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.577767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:45.978 [2024-12-09 05:37:37.577784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:42:45.978 [2024-12-09 05:37:37.577797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:45.978 [2024-12-09 05:37:37.578397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.578423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:45.978 [2024-12-09 05:37:37.578465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 01:42:45.978 [2024-12-09 05:37:37.578480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:42:45.978 [2024-12-09 05:37:37.578656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:45.978 [2024-12-09 05:37:37.578674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:45.978 [2024-12-09 05:37:37.578711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 01:42:45.978 [2024-12-09 05:37:37.578725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.604231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.604279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:46.237 [2024-12-09 05:37:37.604305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.466 ms 01:42:46.237 [2024-12-09 05:37:37.604319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.636545] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 01:42:46.237 [2024-12-09 05:37:37.636797] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:42:46.237 [2024-12-09 05:37:37.636842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.636858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:42:46.237 [2024-12-09 05:37:37.636878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.320 ms 01:42:46.237 [2024-12-09 05:37:37.636905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.668522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.668597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:42:46.237 [2024-12-09 05:37:37.668641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.484 ms 01:42:46.237 [2024-12-09 05:37:37.668654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.685061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.685130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:42:46.237 [2024-12-09 05:37:37.685183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.264 ms 01:42:46.237 [2024-12-09 05:37:37.685217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.701282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.701328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:42:46.237 [2024-12-09 05:37:37.701348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.941 ms 01:42:46.237 [2024-12-09 05:37:37.701360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.702355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.702394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:42:46.237 [2024-12-09 05:37:37.702417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 01:42:46.237 [2024-12-09 05:37:37.702431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 
05:37:37.786373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.786472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:42:46.237 [2024-12-09 05:37:37.786503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.878 ms 01:42:46.237 [2024-12-09 05:37:37.786524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.237 [2024-12-09 05:37:37.799840] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:42:46.237 [2024-12-09 05:37:37.822122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.237 [2024-12-09 05:37:37.822248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:42:46.238 [2024-12-09 05:37:37.822269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.381 ms 01:42:46.238 [2024-12-09 05:37:37.822288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.238 [2024-12-09 05:37:37.822430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.238 [2024-12-09 05:37:37.822467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:42:46.238 [2024-12-09 05:37:37.822482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:42:46.238 [2024-12-09 05:37:37.822500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.238 [2024-12-09 05:37:37.822578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.238 [2024-12-09 05:37:37.822603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:42:46.238 [2024-12-09 05:37:37.822617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 01:42:46.238 [2024-12-09 05:37:37.822644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.238 [2024-12-09 05:37:37.822704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.238 [2024-12-09 05:37:37.822731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:42:46.238 [2024-12-09 05:37:37.822746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:42:46.238 [2024-12-09 05:37:37.822763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.238 [2024-12-09 05:37:37.822863] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:42:46.238 [2024-12-09 05:37:37.822885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.238 [2024-12-09 05:37:37.822900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:42:46.238 [2024-12-09 05:37:37.822913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 01:42:46.238 [2024-12-09 05:37:37.822927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.497 [2024-12-09 05:37:37.856347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.497 [2024-12-09 05:37:37.856386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:42:46.497 [2024-12-09 05:37:37.856405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.386 ms 01:42:46.497 [2024-12-09 05:37:37.856416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.497 [2024-12-09 05:37:37.856549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.497 [2024-12-09 05:37:37.856568] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:42:46.497 [2024-12-09 05:37:37.856585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 01:42:46.497 [2024-12-09 05:37:37.856595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.497 [2024-12-09 05:37:37.858081] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:42:46.497 [2024-12-09 05:37:37.862334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.105 ms, result 0 01:42:46.497 [2024-12-09 05:37:37.863632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:42:46.497 Some configs were skipped because the RPC state that can call them passed over. 01:42:46.497 05:37:37 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 01:42:46.755 [2024-12-09 05:37:38.214896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:46.755 [2024-12-09 05:37:38.215124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 01:42:46.755 [2024-12-09 05:37:38.215264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms 01:42:46.755 [2024-12-09 05:37:38.215330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:46.755 [2024-12-09 05:37:38.215505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.614 ms, result 0 01:42:46.755 true 01:42:46.756 05:37:38 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 01:42:47.014 [2024-12-09 05:37:38.542803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:47.014 [2024-12-09 05:37:38.543106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 01:42:47.014 [2024-12-09 05:37:38.543151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 01:42:47.014 [2024-12-09 05:37:38.543166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:47.014 [2024-12-09 05:37:38.543242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.744 ms, result 0 01:42:47.014 true 01:42:47.014 05:37:38 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78636 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78636 ']' 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78636 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78636 01:42:47.014 killing process with pid 78636 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78636' 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78636 01:42:47.014 05:37:38 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78636 01:42:48.389 [2024-12-09 05:37:39.792391] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.389 [2024-12-09 05:37:39.792470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:42:48.389 [2024-12-09 05:37:39.792493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:42:48.389 [2024-12-09 05:37:39.792509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.792544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:42:48.390 [2024-12-09 05:37:39.796430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.796461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:42:48.390 [2024-12-09 05:37:39.796480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 01:42:48.390 [2024-12-09 05:37:39.796492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.796862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.796883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:42:48.390 [2024-12-09 05:37:39.796899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 01:42:48.390 [2024-12-09 05:37:39.796911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.801177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.801220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:42:48.390 [2024-12-09 05:37:39.801240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 01:42:48.390 [2024-12-09 05:37:39.801252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.809267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.809314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:42:48.390 [2024-12-09 05:37:39.809334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.966 ms 01:42:48.390 [2024-12-09 05:37:39.809345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.822495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.822544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:42:48.390 [2024-12-09 05:37:39.822567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.082 ms 01:42:48.390 [2024-12-09 05:37:39.822578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.832648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.832705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:42:48.390 [2024-12-09 05:37:39.832727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.003 ms 01:42:48.390 [2024-12-09 05:37:39.832739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.832957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.832978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:42:48.390 [2024-12-09 05:37:39.832994] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 01:42:48.390 [2024-12-09 05:37:39.833005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.846622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.846670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:42:48.390 [2024-12-09 05:37:39.846698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.587 ms 01:42:48.390 [2024-12-09 05:37:39.846711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.859959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.859998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:42:48.390 [2024-12-09 05:37:39.860026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.173 ms 01:42:48.390 [2024-12-09 05:37:39.860039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.873013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.873051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:42:48.390 [2024-12-09 05:37:39.873074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 01:42:48.390 [2024-12-09 05:37:39.873086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.886246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.390 [2024-12-09 05:37:39.886284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:42:48.390 [2024-12-09 05:37:39.886307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.071 ms 01:42:48.390 [2024-12-09 05:37:39.886319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.390 [2024-12-09 05:37:39.886368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:42:48.390 [2024-12-09 05:37:39.886391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 
05:37:39.886569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:42:48.390 [2024-12-09 05:37:39.886913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.886932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.886945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.886969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.886982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
01:42:48.391 [2024-12-09 05:37:39.886999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.887988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:42:48.391 [2024-12-09 05:37:39.888155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:42:48.391 [2024-12-09 05:37:39.888178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:42:48.391 [2024-12-09 05:37:39.888197] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:42:48.391 [2024-12-09 05:37:39.888210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:42:48.391 [2024-12-09 05:37:39.888221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:42:48.391 [2024-12-09 05:37:39.888246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:42:48.391 [2024-12-09 05:37:39.888257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:42:48.391 [2024-12-09 05:37:39.888271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:42:48.391 [2024-12-09 05:37:39.888282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:42:48.391 [2024-12-09 05:37:39.888294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:42:48.391 [2024-12-09 05:37:39.888304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:42:48.391 [2024-12-09 05:37:39.888319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
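The shutdown dump above shows all 100 bands free and summarizes device statistics: 960 total writes, zero user writes, hence "WAF: inf" (the write-amplification ratio divides total writes by user writes, and this trim-only workload wrote no user data). When triaging such runs it can help to condense the dump from a saved console log. A small sketch, assuming the saved log keeps one record per line as the live console does; the script name is hypothetical and the awk field positions are tied to the exact dump_bands/dump_stats record layout shown above.

#!/usr/bin/env bash
# summarize_ftl_dump.sh - condense FTL shutdown dumps from a console log.
# Hypothetical helper, not part of the SPDK tree. Note it aggregates across
# every dump present in the log, not just the final one.
log=${1:?usage: summarize_ftl_dump.sh <console.log>}
awk '
    /ftl_dev_dump_bands/ && / Band [0-9]+:/ {
        # record tail: "Band N: <valid> / <size> wr_cnt: <n> state: <state>"
        states[$NF]++
    }
    /ftl_dev_dump_stats/ && / WAF: /          { waf = $NF }
    /ftl_dev_dump_stats/ && / total writes: / { writes = $NF }
    END {
        for (s in states) printf "bands in state %-8s: %d\n", s, states[s]
        printf "total writes: %s  WAF: %s\n", writes, waf
    }
' "$log"

Run against the dump above, it would report 100 bands in state free, total writes 960, WAF inf.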
01:42:48.391 [2024-12-09 05:37:39.888330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:42:48.391 [2024-12-09 05:37:39.888345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.958 ms 01:42:48.391 [2024-12-09 05:37:39.888360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.391 [2024-12-09 05:37:39.906847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.391 [2024-12-09 05:37:39.907089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:42:48.391 [2024-12-09 05:37:39.907122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.418 ms 01:42:48.391 [2024-12-09 05:37:39.907135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.391 [2024-12-09 05:37:39.907744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:48.391 [2024-12-09 05:37:39.907799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:42:48.391 [2024-12-09 05:37:39.907822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 01:42:48.391 [2024-12-09 05:37:39.907834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.392 [2024-12-09 05:37:39.972607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.392 [2024-12-09 05:37:39.972689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:48.392 [2024-12-09 05:37:39.972716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.392 [2024-12-09 05:37:39.972729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.392 [2024-12-09 05:37:39.972977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.392 [2024-12-09 05:37:39.972996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:48.392 [2024-12-09 05:37:39.973023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.392 [2024-12-09 05:37:39.973036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.392 [2024-12-09 05:37:39.973115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.392 [2024-12-09 05:37:39.973134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:48.392 [2024-12-09 05:37:39.973158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.392 [2024-12-09 05:37:39.973171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.392 [2024-12-09 05:37:39.973205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.392 [2024-12-09 05:37:39.973220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:48.392 [2024-12-09 05:37:39.973237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.392 [2024-12-09 05:37:39.973255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.649 [2024-12-09 05:37:40.094846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.649 [2024-12-09 05:37:40.095138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:48.649 [2024-12-09 05:37:40.095181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.649 [2024-12-09 05:37:40.095197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.649 [2024-12-09 
05:37:40.192655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.649 [2024-12-09 05:37:40.192776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:48.649 [2024-12-09 05:37:40.192809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.649 [2024-12-09 05:37:40.192823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.649 [2024-12-09 05:37:40.192937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.649 [2024-12-09 05:37:40.192957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:48.649 [2024-12-09 05:37:40.192981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.649 [2024-12-09 05:37:40.192994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.649 [2024-12-09 05:37:40.193042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.649 [2024-12-09 05:37:40.193058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:48.649 [2024-12-09 05:37:40.193090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.649 [2024-12-09 05:37:40.193101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.649 [2024-12-09 05:37:40.193244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.649 [2024-12-09 05:37:40.193262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:48.649 [2024-12-09 05:37:40.193279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.649 [2024-12-09 05:37:40.193290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.649 [2024-12-09 05:37:40.193389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.650 [2024-12-09 05:37:40.193422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:42:48.650 [2024-12-09 05:37:40.193456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.650 [2024-12-09 05:37:40.193468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.650 [2024-12-09 05:37:40.193533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.650 [2024-12-09 05:37:40.193550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:48.650 [2024-12-09 05:37:40.193573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.650 [2024-12-09 05:37:40.193585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.650 [2024-12-09 05:37:40.193651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:42:48.650 [2024-12-09 05:37:40.193693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:48.650 [2024-12-09 05:37:40.193716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:42:48.650 [2024-12-09 05:37:40.193729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:48.650 [2024-12-09 05:37:40.194005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.547 ms, result 0 01:42:50.024 05:37:41 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 01:42:50.024 05:37:41 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:42:50.024 [2024-12-09 05:37:41.469835] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:42:50.024 [2024-12-09 05:37:41.470033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78707 ] 01:42:50.283 [2024-12-09 05:37:41.663391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:42:50.283 [2024-12-09 05:37:41.818585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:42:50.849 [2024-12-09 05:37:42.190398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:42:50.849 [2024-12-09 05:37:42.190533] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:42:50.849 [2024-12-09 05:37:42.357161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.849 [2024-12-09 05:37:42.357233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:42:50.849 [2024-12-09 05:37:42.357255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:42:50.850 [2024-12-09 05:37:42.357268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.361103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.361160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:42:50.850 [2024-12-09 05:37:42.361179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.805 ms 01:42:50.850 [2024-12-09 05:37:42.361191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.361329] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:42:50.850 [2024-12-09 05:37:42.362319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:42:50.850 [2024-12-09 05:37:42.362361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.362377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:42:50.850 [2024-12-09 05:37:42.362398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 01:42:50.850 [2024-12-09 05:37:42.362419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.364887] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:42:50.850 [2024-12-09 05:37:42.383208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.383256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:42:50.850 [2024-12-09 05:37:42.383306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.322 ms 01:42:50.850 [2024-12-09 05:37:42.383318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.383454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.383475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:42:50.850 [2024-12-09 05:37:42.383488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 01:42:50.850 [2024-12-09 05:37:42.383499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.393547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.393845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:42:50.850 [2024-12-09 05:37:42.393884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.989 ms 01:42:50.850 [2024-12-09 05:37:42.393901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.394124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.394173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:42:50.850 [2024-12-09 05:37:42.394192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 01:42:50.850 [2024-12-09 05:37:42.394209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.394284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.394306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:42:50.850 [2024-12-09 05:37:42.394323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:42:50.850 [2024-12-09 05:37:42.394350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.394403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:42:50.850 [2024-12-09 05:37:42.399842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.399885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:42:50.850 [2024-12-09 05:37:42.399903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.452 ms 01:42:50.850 [2024-12-09 05:37:42.399915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.399983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.400003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:42:50.850 [2024-12-09 05:37:42.400016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:42:50.850 [2024-12-09 05:37:42.400028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.400067] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:42:50.850 [2024-12-09 05:37:42.400102] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:42:50.850 [2024-12-09 05:37:42.400146] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:42:50.850 [2024-12-09 05:37:42.400167] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:42:50.850 [2024-12-09 05:37:42.400279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:42:50.850 [2024-12-09 05:37:42.400295] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:42:50.850 [2024-12-09 05:37:42.400310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:42:50.850 [2024-12-09 05:37:42.400330] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:42:50.850 [2024-12-09 05:37:42.400344] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:42:50.850 [2024-12-09 05:37:42.400358] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:42:50.850 [2024-12-09 05:37:42.400369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:42:50.850 [2024-12-09 05:37:42.400380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:42:50.850 [2024-12-09 05:37:42.400391] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:42:50.850 [2024-12-09 05:37:42.400403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.400415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:42:50.850 [2024-12-09 05:37:42.400427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 01:42:50.850 [2024-12-09 05:37:42.400449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.400552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.400573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:42:50.850 [2024-12-09 05:37:42.400586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 01:42:50.850 [2024-12-09 05:37:42.400597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.400740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:42:50.850 [2024-12-09 05:37:42.400771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:42:50.850 [2024-12-09 05:37:42.400785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:50.850 [2024-12-09 05:37:42.400798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.400810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:42:50.850 [2024-12-09 05:37:42.400821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.400832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:42:50.850 [2024-12-09 05:37:42.400844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:42:50.850 [2024-12-09 05:37:42.400855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:42:50.850 [2024-12-09 05:37:42.400866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:50.850 [2024-12-09 05:37:42.400876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:42:50.850 [2024-12-09 05:37:42.400900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:42:50.850 [2024-12-09 05:37:42.400911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:42:50.850 [2024-12-09 05:37:42.400926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:42:50.850 [2024-12-09 05:37:42.400944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:42:50.850 [2024-12-09 05:37:42.400967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.400978] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:42:50.850 [2024-12-09 05:37:42.400988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:42:50.850 [2024-12-09 05:37:42.400998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:42:50.850 [2024-12-09 05:37:42.401038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:50.850 [2024-12-09 05:37:42.401059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:42:50.850 [2024-12-09 05:37:42.401069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:50.850 [2024-12-09 05:37:42.401089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:42:50.850 [2024-12-09 05:37:42.401099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:50.850 [2024-12-09 05:37:42.401120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:42:50.850 [2024-12-09 05:37:42.401142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:42:50.850 [2024-12-09 05:37:42.401164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:42:50.850 [2024-12-09 05:37:42.401175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:50.850 [2024-12-09 05:37:42.401197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:42:50.850 [2024-12-09 05:37:42.401208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:42:50.850 [2024-12-09 05:37:42.401218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:42:50.850 [2024-12-09 05:37:42.401229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:42:50.850 [2024-12-09 05:37:42.401239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:42:50.850 [2024-12-09 05:37:42.401259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:42:50.850 [2024-12-09 05:37:42.401283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:42:50.850 [2024-12-09 05:37:42.401293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401303] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:42:50.850 [2024-12-09 05:37:42.401315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:42:50.850 [2024-12-09 05:37:42.401333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:42:50.850 [2024-12-09 05:37:42.401344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:42:50.850 [2024-12-09 05:37:42.401356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:42:50.850 
[2024-12-09 05:37:42.401366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:42:50.850 [2024-12-09 05:37:42.401377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:42:50.850 [2024-12-09 05:37:42.401400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:42:50.850 [2024-12-09 05:37:42.401412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:42:50.850 [2024-12-09 05:37:42.401423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:42:50.850 [2024-12-09 05:37:42.401435] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:42:50.850 [2024-12-09 05:37:42.401450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:42:50.850 [2024-12-09 05:37:42.401474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:42:50.850 [2024-12-09 05:37:42.401486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:42:50.850 [2024-12-09 05:37:42.401497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:42:50.850 [2024-12-09 05:37:42.401509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:42:50.850 [2024-12-09 05:37:42.401531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:42:50.850 [2024-12-09 05:37:42.401544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:42:50.850 [2024-12-09 05:37:42.401556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:42:50.850 [2024-12-09 05:37:42.401568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:42:50.850 [2024-12-09 05:37:42.401579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:42:50.850 [2024-12-09 05:37:42.401637] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:42:50.850 [2024-12-09 05:37:42.401673] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:42:50.850 [2024-12-09 05:37:42.401713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:42:50.850 [2024-12-09 05:37:42.401724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:42:50.850 [2024-12-09 05:37:42.401735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:42:50.850 [2024-12-09 05:37:42.401748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.401767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:42:50.850 [2024-12-09 05:37:42.401779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 01:42:50.850 [2024-12-09 05:37:42.401790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.443995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.444068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:42:50.850 [2024-12-09 05:37:42.444089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.107 ms 01:42:50.850 [2024-12-09 05:37:42.444103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:50.850 [2024-12-09 05:37:42.444339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:50.850 [2024-12-09 05:37:42.444360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:42:50.850 [2024-12-09 05:37:42.444374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 01:42:50.850 [2024-12-09 05:37:42.444386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.498378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.498487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:42:51.109 [2024-12-09 05:37:42.498515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.957 ms 01:42:51.109 [2024-12-09 05:37:42.498528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.498751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.498774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:42:51.109 [2024-12-09 05:37:42.498795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:42:51.109 [2024-12-09 05:37:42.498806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.499447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.499821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:42:51.109 [2024-12-09 05:37:42.499910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 01:42:51.109 [2024-12-09 05:37:42.499940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 
05:37:42.500351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.500412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:42:51.109 [2024-12-09 05:37:42.500443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 01:42:51.109 [2024-12-09 05:37:42.500468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.532141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.532236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:42:51.109 [2024-12-09 05:37:42.532268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.605 ms 01:42:51.109 [2024-12-09 05:37:42.532288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.553492] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 01:42:51.109 [2024-12-09 05:37:42.553558] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:42:51.109 [2024-12-09 05:37:42.553584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.553599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:42:51.109 [2024-12-09 05:37:42.553617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.999 ms 01:42:51.109 [2024-12-09 05:37:42.553631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.590973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.591224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:42:51.109 [2024-12-09 05:37:42.591263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.165 ms 01:42:51.109 [2024-12-09 05:37:42.591285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.611135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.611209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:42:51.109 [2024-12-09 05:37:42.611232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.698 ms 01:42:51.109 [2024-12-09 05:37:42.611246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.630863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.630918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:42:51.109 [2024-12-09 05:37:42.630940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.464 ms 01:42:51.109 [2024-12-09 05:37:42.630954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.109 [2024-12-09 05:37:42.632118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.109 [2024-12-09 05:37:42.632161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:42:51.109 [2024-12-09 05:37:42.632181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 01:42:51.109 [2024-12-09 05:37:42.632195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.367 [2024-12-09 05:37:42.727824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.727920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:42:51.368 [2024-12-09 05:37:42.727972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.586 ms 01:42:51.368 [2024-12-09 05:37:42.727992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.743938] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:42:51.368 [2024-12-09 05:37:42.768235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.768318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:42:51.368 [2024-12-09 05:37:42.768352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.020 ms 01:42:51.368 [2024-12-09 05:37:42.768367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.768538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.768565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:42:51.368 [2024-12-09 05:37:42.768583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:42:51.368 [2024-12-09 05:37:42.768597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.768728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.768754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:42:51.368 [2024-12-09 05:37:42.768778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 01:42:51.368 [2024-12-09 05:37:42.768797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.768854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.768887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:42:51.368 [2024-12-09 05:37:42.768902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:42:51.368 [2024-12-09 05:37:42.768916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.768980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:42:51.368 [2024-12-09 05:37:42.769000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.769014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:42:51.368 [2024-12-09 05:37:42.769029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 01:42:51.368 [2024-12-09 05:37:42.769043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.804663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.804749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:42:51.368 [2024-12-09 05:37:42.804785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.576 ms 01:42:51.368 [2024-12-09 05:37:42.804797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.804956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:42:51.368 [2024-12-09 05:37:42.804978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 01:42:51.368 [2024-12-09 05:37:42.804991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 01:42:51.368 [2024-12-09 05:37:42.805007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:42:51.368 [2024-12-09 05:37:42.806206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:42:51.368 [2024-12-09 05:37:42.810404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 448.710 ms, result 0 01:42:51.368 [2024-12-09 05:37:42.811391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:42:51.368 [2024-12-09 05:37:42.827728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:42:52.303  [2024-12-09T05:37:44.858Z] Copying: 24/256 [MB] (24 MBps) [2024-12-09T05:37:46.235Z] Copying: 45/256 [MB] (20 MBps) [2024-12-09T05:37:47.170Z] Copying: 66/256 [MB] (20 MBps) [2024-12-09T05:37:48.105Z] Copying: 87/256 [MB] (21 MBps) [2024-12-09T05:37:49.041Z] Copying: 109/256 [MB] (21 MBps) [2024-12-09T05:37:49.975Z] Copying: 130/256 [MB] (21 MBps) [2024-12-09T05:37:50.929Z] Copying: 151/256 [MB] (20 MBps) [2024-12-09T05:37:51.862Z] Copying: 171/256 [MB] (20 MBps) [2024-12-09T05:37:53.234Z] Copying: 192/256 [MB] (20 MBps) [2024-12-09T05:37:54.165Z] Copying: 212/256 [MB] (20 MBps) [2024-12-09T05:37:55.098Z] Copying: 233/256 [MB] (20 MBps) [2024-12-09T05:37:55.098Z] Copying: 254/256 [MB] (20 MBps) [2024-12-09T05:37:55.098Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-09 05:37:54.917418] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:03.481 [2024-12-09 05:37:54.929599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.929658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:43:03.481 [2024-12-09 05:37:54.929736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:43:03.481 [2024-12-09 05:37:54.929762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.929809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:43:03.481 [2024-12-09 05:37:54.933371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.933401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:43:03.481 [2024-12-09 05:37:54.933414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.540 ms 01:43:03.481 [2024-12-09 05:37:54.933424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.933722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.933757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:43:03.481 [2024-12-09 05:37:54.933769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 01:43:03.481 [2024-12-09 05:37:54.933780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.937247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.937274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:43:03.481 [2024-12-09 05:37:54.937286] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms 01:43:03.481 [2024-12-09 05:37:54.937296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.943992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.944038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:43:03.481 [2024-12-09 05:37:54.944051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.642 ms 01:43:03.481 [2024-12-09 05:37:54.944065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.972985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.973027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:43:03.481 [2024-12-09 05:37:54.973058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.842 ms 01:43:03.481 [2024-12-09 05:37:54.973083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.990575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.990641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:43:03.481 [2024-12-09 05:37:54.990657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.433 ms 01:43:03.481 [2024-12-09 05:37:54.990690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:54.990876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:54.990898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:43:03.481 [2024-12-09 05:37:54.990934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 01:43:03.481 [2024-12-09 05:37:54.990953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:55.021007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:55.021050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:43:03.481 [2024-12-09 05:37:55.021066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.031 ms 01:43:03.481 [2024-12-09 05:37:55.021092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:55.051379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:55.051580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:43:03.481 [2024-12-09 05:37:55.051606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.241 ms 01:43:03.481 [2024-12-09 05:37:55.051619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.481 [2024-12-09 05:37:55.079754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.481 [2024-12-09 05:37:55.079792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:43:03.481 [2024-12-09 05:37:55.079807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.048 ms 01:43:03.481 [2024-12-09 05:37:55.079816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.766 [2024-12-09 05:37:55.106369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.766 [2024-12-09 05:37:55.106407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 01:43:03.766 [2024-12-09 05:37:55.106422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.478 ms 01:43:03.766 [2024-12-09 05:37:55.106439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.766 [2024-12-09 05:37:55.106484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:43:03.766 [2024-12-09 05:37:55.106505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 
05:37:55.106820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:43:03.766 [2024-12-09 05:37:55.106854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.106978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
01:43:03.767 [2024-12-09 05:37:55.107155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:43:03.767 [2024-12-09 05:37:55.107802] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:43:03.767 [2024-12-09 05:37:55.107814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:43:03.767 [2024-12-09 05:37:55.107826] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:43:03.767 [2024-12-09 05:37:55.107836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:43:03.767 [2024-12-09 05:37:55.107847] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:43:03.767 [2024-12-09 05:37:55.107858] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:43:03.767 [2024-12-09 05:37:55.107869] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:43:03.767 [2024-12-09 05:37:55.107885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:43:03.767 [2024-12-09 05:37:55.107896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:43:03.767 [2024-12-09 05:37:55.107906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:43:03.767 [2024-12-09 05:37:55.107916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:43:03.767 [2024-12-09 05:37:55.107926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.767 [2024-12-09 05:37:55.107938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:43:03.767 [2024-12-09 05:37:55.107950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 01:43:03.767 [2024-12-09 05:37:55.107961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.124568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.767 [2024-12-09 05:37:55.124609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:43:03.767 [2024-12-09 05:37:55.124633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.581 ms 01:43:03.767 [2024-12-09 05:37:55.124651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.125253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:03.767 [2024-12-09 05:37:55.125297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:43:03.767 [2024-12-09 05:37:55.125323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 01:43:03.767 [2024-12-09 05:37:55.125333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.169876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.169940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:03.767 [2024-12-09 05:37:55.169980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.169992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 
[2024-12-09 05:37:55.170175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.170194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:03.767 [2024-12-09 05:37:55.170207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.170218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.170278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.170296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:03.767 [2024-12-09 05:37:55.170308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.170332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.170358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.170371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:03.767 [2024-12-09 05:37:55.170382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.170392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.268396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.268463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:03.767 [2024-12-09 05:37:55.268481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.268498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.354871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.355170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:03.767 [2024-12-09 05:37:55.355200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.355213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.355352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.355371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:03.767 [2024-12-09 05:37:55.355384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.355396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.355440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.355454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:03.767 [2024-12-09 05:37:55.355467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.355478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.355604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.355624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:03.767 [2024-12-09 05:37:55.355637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.355648] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.355768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.355810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:43:03.767 [2024-12-09 05:37:55.355823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.355834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.355882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.355897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:03.767 [2024-12-09 05:37:55.355910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.355920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.356010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:03.767 [2024-12-09 05:37:55.356028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:03.767 [2024-12-09 05:37:55.356041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:03.767 [2024-12-09 05:37:55.356053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:03.767 [2024-12-09 05:37:55.356290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.679 ms, result 0 01:43:05.155 01:43:05.155 01:43:05.155 05:37:56 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 01:43:05.155 05:37:56 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 01:43:05.414 05:37:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:43:05.414 [2024-12-09 05:37:57.001005] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
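The three commands above are the heart of the trim check: cmp confirms the first 4 MiB of the data file still reads back as zeroes, md5sum records a checksum for a later comparison, and spdk_dd then copies 1024 input blocks of the random pattern into the ftl0 bdev using the saved JSON config. A condensed sketch of the same flow, with $TESTDIR and $SPDK_DIR standing in for the absolute paths shown in the log:

    cmp --bytes=4194304 "$TESTDIR/data" /dev/zero
    md5sum "$TESTDIR/data"
    "$SPDK_DIR/build/bin/spdk_dd" \
        --if="$TESTDIR/random_pattern" \
        --ob=ftl0 \
        --count=1024 \
        --json="$TESTDIR/config/ftl.json"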
01:43:05.414 [2024-12-09 05:37:57.001403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78862 ] 01:43:05.673 [2024-12-09 05:37:57.183367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:43:05.931 [2024-12-09 05:37:57.303014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:43:06.189 [2024-12-09 05:37:57.659422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:06.189 [2024-12-09 05:37:57.659774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:06.449 [2024-12-09 05:37:57.823008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.823305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:43:06.449 [2024-12-09 05:37:57.823336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:43:06.449 [2024-12-09 05:37:57.823349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.826538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.826784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:06.449 [2024-12-09 05:37:57.826814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 01:43:06.449 [2024-12-09 05:37:57.826827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.827015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:43:06.449 [2024-12-09 05:37:57.827897] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:43:06.449 [2024-12-09 05:37:57.827936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.827966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:06.449 [2024-12-09 05:37:57.827985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 01:43:06.449 [2024-12-09 05:37:57.827995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.829969] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:43:06.449 [2024-12-09 05:37:57.844538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.844576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:43:06.449 [2024-12-09 05:37:57.844592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.570 ms 01:43:06.449 [2024-12-09 05:37:57.844603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.844741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.844773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:43:06.449 [2024-12-09 05:37:57.844794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 01:43:06.449 [2024-12-09 05:37:57.844804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.853391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
01:43:06.449 [2024-12-09 05:37:57.853429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:06.449 [2024-12-09 05:37:57.853444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.532 ms 01:43:06.449 [2024-12-09 05:37:57.853453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.853568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.853587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:06.449 [2024-12-09 05:37:57.853599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 01:43:06.449 [2024-12-09 05:37:57.853609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.853650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.853702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:43:06.449 [2024-12-09 05:37:57.853718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:43:06.449 [2024-12-09 05:37:57.853729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.853764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:43:06.449 [2024-12-09 05:37:57.858228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.858261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:06.449 [2024-12-09 05:37:57.858275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.472 ms 01:43:06.449 [2024-12-09 05:37:57.858286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.858361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.858380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:43:06.449 [2024-12-09 05:37:57.858392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:43:06.449 [2024-12-09 05:37:57.858402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.449 [2024-12-09 05:37:57.858463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:43:06.449 [2024-12-09 05:37:57.858503] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:43:06.449 [2024-12-09 05:37:57.858541] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:43:06.449 [2024-12-09 05:37:57.858561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:43:06.449 [2024-12-09 05:37:57.858659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:43:06.449 [2024-12-09 05:37:57.858675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:43:06.449 [2024-12-09 05:37:57.858717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:43:06.449 [2024-12-09 05:37:57.858738] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:43:06.449 [2024-12-09 05:37:57.858751] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:43:06.449 [2024-12-09 05:37:57.858762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:43:06.449 [2024-12-09 05:37:57.858773] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:43:06.449 [2024-12-09 05:37:57.858786] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:43:06.449 [2024-12-09 05:37:57.858796] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:43:06.449 [2024-12-09 05:37:57.858810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.449 [2024-12-09 05:37:57.858821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:43:06.449 [2024-12-09 05:37:57.858832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 01:43:06.450 [2024-12-09 05:37:57.858842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.450 [2024-12-09 05:37:57.858939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.450 [2024-12-09 05:37:57.858960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:43:06.450 [2024-12-09 05:37:57.858972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 01:43:06.450 [2024-12-09 05:37:57.858983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.450 [2024-12-09 05:37:57.859117] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:43:06.450 [2024-12-09 05:37:57.859149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:43:06.450 [2024-12-09 05:37:57.859160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:43:06.450 [2024-12-09 05:37:57.859192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:43:06.450 [2024-12-09 05:37:57.859222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:06.450 [2024-12-09 05:37:57.859241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:43:06.450 [2024-12-09 05:37:57.859263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:43:06.450 [2024-12-09 05:37:57.859273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:06.450 [2024-12-09 05:37:57.859284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:43:06.450 [2024-12-09 05:37:57.859296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:43:06.450 [2024-12-09 05:37:57.859306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:43:06.450 [2024-12-09 05:37:57.859325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859335] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:43:06.450 [2024-12-09 05:37:57.859354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:43:06.450 [2024-12-09 05:37:57.859381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:43:06.450 [2024-12-09 05:37:57.859408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:43:06.450 [2024-12-09 05:37:57.859435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:43:06.450 [2024-12-09 05:37:57.859463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:06.450 [2024-12-09 05:37:57.859482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:43:06.450 [2024-12-09 05:37:57.859491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:43:06.450 [2024-12-09 05:37:57.859501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:06.450 [2024-12-09 05:37:57.859510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:43:06.450 [2024-12-09 05:37:57.859520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:43:06.450 [2024-12-09 05:37:57.859529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:43:06.450 [2024-12-09 05:37:57.859547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:43:06.450 [2024-12-09 05:37:57.859556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859565] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:43:06.450 [2024-12-09 05:37:57.859577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:43:06.450 [2024-12-09 05:37:57.859592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:06.450 [2024-12-09 05:37:57.859613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:43:06.450 [2024-12-09 05:37:57.859623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:43:06.450 [2024-12-09 05:37:57.859633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:43:06.450 
[2024-12-09 05:37:57.859641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:43:06.450 [2024-12-09 05:37:57.859650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:43:06.450 [2024-12-09 05:37:57.859660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:43:06.450 [2024-12-09 05:37:57.859671] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:43:06.450 [2024-12-09 05:37:57.859683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:43:06.450 [2024-12-09 05:37:57.859719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:43:06.450 [2024-12-09 05:37:57.859733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:43:06.450 [2024-12-09 05:37:57.859744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:43:06.450 [2024-12-09 05:37:57.859754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:43:06.450 [2024-12-09 05:37:57.859764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:43:06.450 [2024-12-09 05:37:57.859774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:43:06.450 [2024-12-09 05:37:57.859784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:43:06.450 [2024-12-09 05:37:57.859793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:43:06.450 [2024-12-09 05:37:57.859803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:43:06.450 [2024-12-09 05:37:57.859853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:43:06.450 [2024-12-09 05:37:57.859864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:43:06.450 [2024-12-09 05:37:57.859885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:43:06.450 [2024-12-09 05:37:57.859895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:43:06.450 [2024-12-09 05:37:57.859905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:43:06.450 [2024-12-09 05:37:57.859916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.450 [2024-12-09 05:37:57.859932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:43:06.450 [2024-12-09 05:37:57.859944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 01:43:06.450 [2024-12-09 05:37:57.859954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.450 [2024-12-09 05:37:57.897284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.450 [2024-12-09 05:37:57.897364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:06.450 [2024-12-09 05:37:57.897383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.251 ms 01:43:06.450 [2024-12-09 05:37:57.897394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.450 [2024-12-09 05:37:57.897576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.450 [2024-12-09 05:37:57.897596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:43:06.450 [2024-12-09 05:37:57.897608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:43:06.450 [2024-12-09 05:37:57.897619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.450 [2024-12-09 05:37:57.948323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.450 [2024-12-09 05:37:57.948382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:06.450 [2024-12-09 05:37:57.948405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.675 ms 01:43:06.450 [2024-12-09 05:37:57.948416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.450 [2024-12-09 05:37:57.948562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.450 [2024-12-09 05:37:57.948582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:06.450 [2024-12-09 05:37:57.948595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:43:06.451 [2024-12-09 05:37:57.948605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:57.949258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:57.949283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:06.451 [2024-12-09 05:37:57.949304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 01:43:06.451 [2024-12-09 05:37:57.949315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:57.949489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:57.949507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:06.451 [2024-12-09 05:37:57.949518] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 01:43:06.451 [2024-12-09 05:37:57.949529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:57.967541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:57.967823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:06.451 [2024-12-09 05:37:57.967851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.984 ms 01:43:06.451 [2024-12-09 05:37:57.967864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:57.982800] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 01:43:06.451 [2024-12-09 05:37:57.982996] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:43:06.451 [2024-12-09 05:37:57.983021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:57.983034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:43:06.451 [2024-12-09 05:37:57.983076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.018 ms 01:43:06.451 [2024-12-09 05:37:57.983089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:58.010124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:58.010182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:43:06.451 [2024-12-09 05:37:58.010200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.944 ms 01:43:06.451 [2024-12-09 05:37:58.010211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:58.024794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:58.024831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:43:06.451 [2024-12-09 05:37:58.024845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.493 ms 01:43:06.451 [2024-12-09 05:37:58.024856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:58.038500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:58.038540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:43:06.451 [2024-12-09 05:37:58.038555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.562 ms 01:43:06.451 [2024-12-09 05:37:58.038566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.451 [2024-12-09 05:37:58.039443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.451 [2024-12-09 05:37:58.039476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:43:06.451 [2024-12-09 05:37:58.039490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 01:43:06.451 [2024-12-09 05:37:58.039501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.113550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.113642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:43:06.710 [2024-12-09 05:37:58.113677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.017 ms 01:43:06.710 [2024-12-09 05:37:58.113708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.125572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:43:06.710 [2024-12-09 05:37:58.145969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.146044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:43:06.710 [2024-12-09 05:37:58.146063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.102 ms 01:43:06.710 [2024-12-09 05:37:58.146081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.146226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.146245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:43:06.710 [2024-12-09 05:37:58.146258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:43:06.710 [2024-12-09 05:37:58.146269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.146338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.146353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:43:06.710 [2024-12-09 05:37:58.146365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 01:43:06.710 [2024-12-09 05:37:58.146382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.146425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.146471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:43:06.710 [2024-12-09 05:37:58.146485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:43:06.710 [2024-12-09 05:37:58.146497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.146550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:43:06.710 [2024-12-09 05:37:58.146569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.146581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:43:06.710 [2024-12-09 05:37:58.146594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:43:06.710 [2024-12-09 05:37:58.146604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.175930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.176145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:43:06.710 [2024-12-09 05:37:58.176172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.298 ms 01:43:06.710 [2024-12-09 05:37:58.176184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.710 [2024-12-09 05:37:58.176317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.710 [2024-12-09 05:37:58.176338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:43:06.710 [2024-12-09 05:37:58.176350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:43:06.710 [2024-12-09 05:37:58.176361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
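Every management step in the startup trace above is logged as an Action/name/duration/status quartet, which makes the trace easy to profile after the fact. A throwaway sketch, assuming the records sit one per line in a hypothetical capture file ftl_startup.log, that pairs each step name with the duration record that follows it and lists the slowest steps first:

    grep -E 'name:|duration:' ftl_startup.log |
      awk -F'name: |duration: ' '/name:/ {n = $2} /duration:/ {print $2, "-", n}' |
      sort -rn | head

On this run the top entries would be the P2L checkpoint restore (74.017 ms) and the NV cache init (50.675 ms) recorded earlier in the trace.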
01:43:06.710 [2024-12-09 05:37:58.177783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:06.710 [2024-12-09 05:37:58.181515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.327 ms, result 0 01:43:06.710 [2024-12-09 05:37:58.182468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:06.710 [2024-12-09 05:37:58.197378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:06.969  [2024-12-09T05:37:58.586Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-12-09 05:37:58.382285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:06.969 [2024-12-09 05:37:58.392974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.969 [2024-12-09 05:37:58.393011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:43:06.969 [2024-12-09 05:37:58.393048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:43:06.969 [2024-12-09 05:37:58.393058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.969 [2024-12-09 05:37:58.393083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:43:06.969 [2024-12-09 05:37:58.396381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.969 [2024-12-09 05:37:58.396411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:43:06.969 [2024-12-09 05:37:58.396424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 01:43:06.969 [2024-12-09 05:37:58.396433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.969 [2024-12-09 05:37:58.398494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.969 [2024-12-09 05:37:58.398531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:43:06.969 [2024-12-09 05:37:58.398546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 01:43:06.969 [2024-12-09 05:37:58.398557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.969 [2024-12-09 05:37:58.401794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.969 [2024-12-09 05:37:58.401827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:43:06.969 [2024-12-09 05:37:58.401841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 01:43:06.969 [2024-12-09 05:37:58.401850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.969 [2024-12-09 05:37:58.407863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.969 [2024-12-09 05:37:58.407893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:43:06.969 [2024-12-09 05:37:58.407905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 01:43:06.969 [2024-12-09 05:37:58.407915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.969 [2024-12-09 05:37:58.434020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.969 [2024-12-09 05:37:58.434075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:43:06.970 [2024-12-09 05:37:58.434090] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.043 ms 01:43:06.970 [2024-12-09 05:37:58.434100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.451563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.970 [2024-12-09 05:37:58.451607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:43:06.970 [2024-12-09 05:37:58.451622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.406 ms 01:43:06.970 [2024-12-09 05:37:58.451632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.451800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.970 [2024-12-09 05:37:58.451822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:43:06.970 [2024-12-09 05:37:58.451848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 01:43:06.970 [2024-12-09 05:37:58.451859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.477916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.970 [2024-12-09 05:37:58.477952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:43:06.970 [2024-12-09 05:37:58.477966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 01:43:06.970 [2024-12-09 05:37:58.477976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.503712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.970 [2024-12-09 05:37:58.503950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:43:06.970 [2024-12-09 05:37:58.503977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.679 ms 01:43:06.970 [2024-12-09 05:37:58.503988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.529611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.970 [2024-12-09 05:37:58.529648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:43:06.970 [2024-12-09 05:37:58.529693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.559 ms 01:43:06.970 [2024-12-09 05:37:58.529707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.558565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.970 [2024-12-09 05:37:58.558610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:43:06.970 [2024-12-09 05:37:58.558626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.769 ms 01:43:06.970 [2024-12-09 05:37:58.558638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.970 [2024-12-09 05:37:58.558731] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:43:06.970 [2024-12-09 05:37:58.558764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
01:43:06.970 [2024-12-09 05:37:58.558818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.558982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:43:06.970 [2024-12-09 05:37:58.559658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559798] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.559995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:43:06.971 [2024-12-09 05:37:58.560114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:43:06.971 [2024-12-09 05:37:58.560124] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:43:06.971 [2024-12-09 05:37:58.560136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:43:06.971 [2024-12-09 05:37:58.560146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 01:43:06.971 [2024-12-09 05:37:58.560155] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:43:06.971 [2024-12-09 05:37:58.560166] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:43:06.971 [2024-12-09 05:37:58.560192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:43:06.971 [2024-12-09 05:37:58.560203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:43:06.971 [2024-12-09 05:37:58.560219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:43:06.971 [2024-12-09 05:37:58.560229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:43:06.971 [2024-12-09 05:37:58.560239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:43:06.971 [2024-12-09 05:37:58.560249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.971 [2024-12-09 05:37:58.560260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:43:06.971 [2024-12-09 05:37:58.560272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 01:43:06.971 [2024-12-09 05:37:58.560283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.971 [2024-12-09 05:37:58.577653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.971 [2024-12-09 05:37:58.577741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:43:06.971 [2024-12-09 05:37:58.577760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.345 ms 01:43:06.971 [2024-12-09 05:37:58.577773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:06.971 [2024-12-09 05:37:58.578310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:06.971 [2024-12-09 05:37:58.578342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:43:06.971 [2024-12-09 05:37:58.578356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 01:43:06.971 [2024-12-09 05:37:58.578367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.622326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.622388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:07.230 [2024-12-09 05:37:58.622409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.622424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.622549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.622568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:07.230 [2024-12-09 05:37:58.622579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.622592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.622669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.622716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:07.230 [2024-12-09 05:37:58.622731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.622758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.622813] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.622828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:07.230 [2024-12-09 05:37:58.622838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.622849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.726767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.726822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:07.230 [2024-12-09 05:37:58.726841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.726861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.808817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.808887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:07.230 [2024-12-09 05:37:58.808907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.808921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.809026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:07.230 [2024-12-09 05:37:58.809038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.809050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.809143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:07.230 [2024-12-09 05:37:58.809154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.809164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.809298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:07.230 [2024-12-09 05:37:58.809310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.809320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.809386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:43:07.230 [2024-12-09 05:37:58.809411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.809422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.809493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:07.230 [2024-12-09 05:37:58.809503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.809514] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:07.230 [2024-12-09 05:37:58.809586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:07.230 [2024-12-09 05:37:58.809597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:07.230 [2024-12-09 05:37:58.809607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:07.230 [2024-12-09 05:37:58.809870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.875 ms, result 0 01:43:08.605 01:43:08.605 01:43:08.605 05:37:59 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78898 01:43:08.605 05:37:59 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 01:43:08.605 05:37:59 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78898 01:43:08.605 05:37:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78898 ']' 01:43:08.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:43:08.605 05:37:59 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:43:08.605 05:37:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 01:43:08.605 05:37:59 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:43:08.605 05:37:59 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 01:43:08.605 05:37:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:43:08.605 [2024-12-09 05:37:59.947600] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
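The trim.sh lines above (@92-@94) background /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init and then block in waitforlisten until the target (pid 78898) answers on /var/tmp/spdk.sock. A minimal Python sketch of that wait loop, assuming only the socket path and a time budget (the helper name wait_for_listen and the 30 s timeout are illustrative; the real waitforlisten in autotest_common.sh does more, e.g. the max_retries=100 visible in the trace above):

# Sketch of the waitforlisten pattern: probe the UNIX-domain RPC socket
# until spdk_tgt starts accepting connections. The socket path is taken
# from the log; the function name and timeout are illustrative assumptions.
import socket
import time

def wait_for_listen(sock_path="/var/tmp/spdk.sock", timeout_s=30.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)   # succeeds once the target is listening
            return True
        except OSError:
            time.sleep(0.1)        # target not up yet; retry shortly
        finally:
            s.close()
    return False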
01:43:08.605 [2024-12-09 05:37:59.947810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78898 ] 01:43:08.605 [2024-12-09 05:38:00.123229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:43:08.862 [2024-12-09 05:38:00.245227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:43:09.794 05:38:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:43:09.794 05:38:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 01:43:09.794 05:38:01 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 01:43:09.794 [2024-12-09 05:38:01.285930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:09.794 [2024-12-09 05:38:01.286010] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:10.053 [2024-12-09 05:38:01.468449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.468512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:43:10.053 [2024-12-09 05:38:01.468537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:43:10.053 [2024-12-09 05:38:01.468550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.472037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.472094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:10.053 [2024-12-09 05:38:01.472113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 01:43:10.053 [2024-12-09 05:38:01.472125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.472257] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:43:10.053 [2024-12-09 05:38:01.473162] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:43:10.053 [2024-12-09 05:38:01.473425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.473447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:10.053 [2024-12-09 05:38:01.473463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 01:43:10.053 [2024-12-09 05:38:01.473478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.475766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:43:10.053 [2024-12-09 05:38:01.491530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.491809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:43:10.053 [2024-12-09 05:38:01.491840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.770 ms 01:43:10.053 [2024-12-09 05:38:01.491858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.491985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.492011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:43:10.053 [2024-12-09 05:38:01.492026] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 01:43:10.053 [2024-12-09 05:38:01.492042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.501179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.501237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:10.053 [2024-12-09 05:38:01.501253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.066 ms 01:43:10.053 [2024-12-09 05:38:01.501267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.501447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.501476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:10.053 [2024-12-09 05:38:01.501491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 01:43:10.053 [2024-12-09 05:38:01.501517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.501557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.501580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:43:10.053 [2024-12-09 05:38:01.501594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:43:10.053 [2024-12-09 05:38:01.501609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.501643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:43:10.053 [2024-12-09 05:38:01.506489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.506527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:10.053 [2024-12-09 05:38:01.506551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 01:43:10.053 [2024-12-09 05:38:01.506564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.506708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.506731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:43:10.053 [2024-12-09 05:38:01.506775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 01:43:10.053 [2024-12-09 05:38:01.506805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.506843] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:43:10.053 [2024-12-09 05:38:01.506875] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:43:10.053 [2024-12-09 05:38:01.506931] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:43:10.053 [2024-12-09 05:38:01.506956] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:43:10.053 [2024-12-09 05:38:01.507080] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:43:10.053 [2024-12-09 05:38:01.507097] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:43:10.053 [2024-12-09 05:38:01.507124] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:43:10.053 [2024-12-09 05:38:01.507140] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:43:10.053 [2024-12-09 05:38:01.507156] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:43:10.053 [2024-12-09 05:38:01.507169] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:43:10.053 [2024-12-09 05:38:01.507182] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:43:10.053 [2024-12-09 05:38:01.507193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:43:10.053 [2024-12-09 05:38:01.507208] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:43:10.053 [2024-12-09 05:38:01.507220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.507234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:43:10.053 [2024-12-09 05:38:01.507246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 01:43:10.053 [2024-12-09 05:38:01.507262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.507349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.053 [2024-12-09 05:38:01.507367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:43:10.053 [2024-12-09 05:38:01.507379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:43:10.053 [2024-12-09 05:38:01.507392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.053 [2024-12-09 05:38:01.507493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:43:10.053 [2024-12-09 05:38:01.507511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:43:10.053 [2024-12-09 05:38:01.507523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:10.053 [2024-12-09 05:38:01.507537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:10.053 [2024-12-09 05:38:01.507548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:43:10.053 [2024-12-09 05:38:01.507561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:43:10.053 [2024-12-09 05:38:01.507571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:43:10.053 [2024-12-09 05:38:01.507588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:43:10.053 [2024-12-09 05:38:01.507599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:43:10.053 [2024-12-09 05:38:01.507612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:10.053 [2024-12-09 05:38:01.507622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:43:10.053 [2024-12-09 05:38:01.507635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:43:10.053 [2024-12-09 05:38:01.507645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:10.053 [2024-12-09 05:38:01.507657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:43:10.053 [2024-12-09 05:38:01.507939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:43:10.053 [2024-12-09 05:38:01.508008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:10.053 
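A quick consistency check on the ftl_layout numbers just dumped: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB l2p region, and at a 4 KiB FTL block (an assumption; the block size is not printed in this dump, 4 KiB is the usual value) those entries map 90 GiB of user LBAs out of the 103424.00 MiB base device, the balance presumably being overprovisioning and metadata:

# Cross-check of the layout dump above; every input except block_size
# is copied from the log. block_size = 4096 is an assumption.
l2p_entries = 23592960      # "L2P entries"
l2p_addr_size = 4           # "L2P address size" (bytes per entry)
block_size = 4096           # assumed FTL block size in bytes

MiB, GiB = 1 << 20, 1 << 30
print(f"l2p region: {l2p_entries * l2p_addr_size / MiB:.2f} MiB")  # 90.00, matches dump
print(f"mapped LBAs: {l2p_entries * block_size / GiB:.0f} GiB")    # 90 GiB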
[2024-12-09 05:38:01.508130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:43:10.053 [2024-12-09 05:38:01.508189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:43:10.053 [2024-12-09 05:38:01.508245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:10.053 [2024-12-09 05:38:01.508398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:43:10.053 [2024-12-09 05:38:01.508441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:43:10.053 [2024-12-09 05:38:01.508550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:10.053 [2024-12-09 05:38:01.508599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:43:10.053 [2024-12-09 05:38:01.508803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:43:10.053 [2024-12-09 05:38:01.508855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:10.053 [2024-12-09 05:38:01.508900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:43:10.053 [2024-12-09 05:38:01.509044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:43:10.053 [2024-12-09 05:38:01.509101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:10.053 [2024-12-09 05:38:01.509231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:43:10.053 [2024-12-09 05:38:01.509346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:43:10.053 [2024-12-09 05:38:01.509369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:10.053 [2024-12-09 05:38:01.509384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:43:10.053 [2024-12-09 05:38:01.509396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:43:10.053 [2024-12-09 05:38:01.509412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:10.053 [2024-12-09 05:38:01.509424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:43:10.053 [2024-12-09 05:38:01.509437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:43:10.053 [2024-12-09 05:38:01.509449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:10.054 [2024-12-09 05:38:01.509462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:43:10.054 [2024-12-09 05:38:01.509473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:43:10.054 [2024-12-09 05:38:01.509500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:10.054 [2024-12-09 05:38:01.509514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:43:10.054 [2024-12-09 05:38:01.509531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:43:10.054 [2024-12-09 05:38:01.509542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:10.054 [2024-12-09 05:38:01.509558] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:43:10.054 [2024-12-09 05:38:01.509577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:43:10.054 [2024-12-09 05:38:01.509595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:10.054 [2024-12-09 05:38:01.509607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:10.054 [2024-12-09 05:38:01.509625] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 01:43:10.054 [2024-12-09 05:38:01.509654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:43:10.054 [2024-12-09 05:38:01.509697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:43:10.054 [2024-12-09 05:38:01.509713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:43:10.054 [2024-12-09 05:38:01.509731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:43:10.054 [2024-12-09 05:38:01.509744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:43:10.054 [2024-12-09 05:38:01.509762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:43:10.054 [2024-12-09 05:38:01.509778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.509801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:43:10.054 [2024-12-09 05:38:01.509814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:43:10.054 [2024-12-09 05:38:01.509833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:43:10.054 [2024-12-09 05:38:01.509846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:43:10.054 [2024-12-09 05:38:01.509863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:43:10.054 [2024-12-09 05:38:01.509875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:43:10.054 [2024-12-09 05:38:01.509892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:43:10.054 [2024-12-09 05:38:01.509905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:43:10.054 [2024-12-09 05:38:01.509922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:43:10.054 [2024-12-09 05:38:01.509935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.509949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.509961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.509975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.509987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:43:10.054 [2024-12-09 05:38:01.510000] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:43:10.054 [2024-12-09 
05:38:01.510013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.510031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:43:10.054 [2024-12-09 05:38:01.510043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:43:10.054 [2024-12-09 05:38:01.510057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:43:10.054 [2024-12-09 05:38:01.510069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:43:10.054 [2024-12-09 05:38:01.510100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.510112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:43:10.054 [2024-12-09 05:38:01.510126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 01:43:10.054 [2024-12-09 05:38:01.510139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.552267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.552348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:10.054 [2024-12-09 05:38:01.552372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.039 ms 01:43:10.054 [2024-12-09 05:38:01.552387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.552573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.552593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:43:10.054 [2024-12-09 05:38:01.552609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 01:43:10.054 [2024-12-09 05:38:01.552620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.598696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.598766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:10.054 [2024-12-09 05:38:01.598796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.039 ms 01:43:10.054 [2024-12-09 05:38:01.598811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.598989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.599040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:10.054 [2024-12-09 05:38:01.599091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:43:10.054 [2024-12-09 05:38:01.599103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.599722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.599768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:10.054 [2024-12-09 05:38:01.599789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 01:43:10.054 [2024-12-09 05:38:01.599802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.600041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.600082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:10.054 [2024-12-09 05:38:01.600132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 01:43:10.054 [2024-12-09 05:38:01.600146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.623076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.623360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:10.054 [2024-12-09 05:38:01.623406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.891 ms 01:43:10.054 [2024-12-09 05:38:01.623422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.054 [2024-12-09 05:38:01.655055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:43:10.054 [2024-12-09 05:38:01.655295] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:43:10.054 [2024-12-09 05:38:01.655350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.054 [2024-12-09 05:38:01.655365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:43:10.054 [2024-12-09 05:38:01.655385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.758 ms 01:43:10.054 [2024-12-09 05:38:01.655414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.683573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.683615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:43:10.313 [2024-12-09 05:38:01.683640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.047 ms 01:43:10.313 [2024-12-09 05:38:01.683654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.699186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.699391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:43:10.313 [2024-12-09 05:38:01.699439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.374 ms 01:43:10.313 [2024-12-09 05:38:01.699454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.713929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.713969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:43:10.313 [2024-12-09 05:38:01.713993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.302 ms 01:43:10.313 [2024-12-09 05:38:01.714007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.715018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.715056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:43:10.313 [2024-12-09 05:38:01.715079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 01:43:10.313 [2024-12-09 05:38:01.715109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 
05:38:01.793791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.793852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:43:10.313 [2024-12-09 05:38:01.793882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.638 ms 01:43:10.313 [2024-12-09 05:38:01.793896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.806874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:43:10.313 [2024-12-09 05:38:01.828685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.828785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:43:10.313 [2024-12-09 05:38:01.828808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.640 ms 01:43:10.313 [2024-12-09 05:38:01.828835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.829027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.829056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:43:10.313 [2024-12-09 05:38:01.829071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:43:10.313 [2024-12-09 05:38:01.829086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.829231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.829260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:43:10.313 [2024-12-09 05:38:01.829276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 01:43:10.313 [2024-12-09 05:38:01.829294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.829330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.829351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:43:10.313 [2024-12-09 05:38:01.829364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:43:10.313 [2024-12-09 05:38:01.829379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.829437] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:43:10.313 [2024-12-09 05:38:01.829462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.829477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:43:10.313 [2024-12-09 05:38:01.829492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:43:10.313 [2024-12-09 05:38:01.829507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.861528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.861572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:43:10.313 [2024-12-09 05:38:01.861593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.983 ms 01:43:10.313 [2024-12-09 05:38:01.861605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.861809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.313 [2024-12-09 05:38:01.861832] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:43:10.313 [2024-12-09 05:38:01.861853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 01:43:10.313 [2024-12-09 05:38:01.861866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.313 [2024-12-09 05:38:01.863273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:10.313 [2024-12-09 05:38:01.867749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.293 ms, result 0 01:43:10.313 [2024-12-09 05:38:01.869033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:10.313 Some configs were skipped because the RPC state that can call them passed over. 01:43:10.313 05:38:01 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 01:43:10.878 [2024-12-09 05:38:02.198514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:10.878 [2024-12-09 05:38:02.198791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 01:43:10.878 [2024-12-09 05:38:02.198930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 01:43:10.878 [2024-12-09 05:38:02.199112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:10.878 [2024-12-09 05:38:02.199212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.409 ms, result 0 01:43:10.878 true 01:43:10.878 05:38:02 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 01:43:11.136 [2024-12-09 05:38:02.502475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:11.136 [2024-12-09 05:38:02.502757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 01:43:11.136 [2024-12-09 05:38:02.502911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 01:43:11.136 [2024-12-09 05:38:02.503099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:11.136 [2024-12-09 05:38:02.503201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.932 ms, result 0 01:43:11.136 true 01:43:11.136 05:38:02 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78898 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78898 ']' 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78898 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78898 01:43:11.136 killing process with pid 78898 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78898' 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78898 01:43:11.136 05:38:02 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78898 01:43:12.070 [2024-12-09 05:38:03.555360] 
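The two bdev_ftl_unmap RPCs above trim the first and the last 1024-block ranges of the device: 23591936 is 23592960 - 1024, i.e. the L2P entry count minus the unmap length, so the second trim ends exactly at the last mapped LBA. A sketch reproducing the offsets and the RPC invocations (the command line is copied verbatim from the trim.sh trace; running it requires the target from this log to be up):

# Recompute the two unmap offsets used by ftl/trim.sh above and issue
# the same RPCs. Paths and arguments are taken verbatim from the log.
import subprocess

num_lbas = 23592960          # L2P entries == addressable FTL blocks
num_blocks = 1024
assert num_lbas - num_blocks == 23591936   # matches trim.sh@100 above

for lba in (0, num_lbas - num_blocks):
    subprocess.run(
        ["/home/vagrant/spdk_repo/spdk/scripts/rpc.py", "bdev_ftl_unmap",
         "-b", "ftl0", "--lba", str(lba), "--num_blocks", str(num_blocks)],
        check=True,
    )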
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.555441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:43:12.070 [2024-12-09 05:38:03.555462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:43:12.070 [2024-12-09 05:38:03.555476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.555509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:43:12.070 [2024-12-09 05:38:03.559803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.559870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:43:12.070 [2024-12-09 05:38:03.559896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.268 ms 01:43:12.070 [2024-12-09 05:38:03.559908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.560311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.560338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:43:12.070 [2024-12-09 05:38:03.560369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 01:43:12.070 [2024-12-09 05:38:03.560380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.564792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.564835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:43:12.070 [2024-12-09 05:38:03.564856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.383 ms 01:43:12.070 [2024-12-09 05:38:03.564877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.572628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.572674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:43:12.070 [2024-12-09 05:38:03.572726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.693 ms 01:43:12.070 [2024-12-09 05:38:03.572738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.584665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.584740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:43:12.070 [2024-12-09 05:38:03.584780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.847 ms 01:43:12.070 [2024-12-09 05:38:03.584792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.593395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.593437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:43:12.070 [2024-12-09 05:38:03.593462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.549 ms 01:43:12.070 [2024-12-09 05:38:03.593473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.593614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.593631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:43:12.070 [2024-12-09 05:38:03.593645] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 01:43:12.070 [2024-12-09 05:38:03.593656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.605310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.605345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:43:12.070 [2024-12-09 05:38:03.605367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.587 ms 01:43:12.070 [2024-12-09 05:38:03.605377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.616580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.616820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:43:12.070 [2024-12-09 05:38:03.616858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.158 ms 01:43:12.070 [2024-12-09 05:38:03.616871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.070 [2024-12-09 05:38:03.627805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.070 [2024-12-09 05:38:03.627987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:43:12.070 [2024-12-09 05:38:03.628025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.878 ms 01:43:12.070 [2024-12-09 05:38:03.628038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.071 [2024-12-09 05:38:03.639697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.071 [2024-12-09 05:38:03.639972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:43:12.071 [2024-12-09 05:38:03.640010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.562 ms 01:43:12.071 [2024-12-09 05:38:03.640024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.071 [2024-12-09 05:38:03.640081] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:43:12.071 [2024-12-09 05:38:03.640116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 
05:38:03.640290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
01:43:12.071 [2024-12-09 05:38:03.640755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.640998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 01:43:12.071 [2024-12-09 05:38:03.641654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:43:12.072 [2024-12-09 05:38:03.641919] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:43:12.072 [2024-12-09 05:38:03.641939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:43:12.072 [2024-12-09 05:38:03.641971] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:43:12.072 [2024-12-09 05:38:03.641996] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:43:12.072 [2024-12-09 05:38:03.642009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:43:12.072 [2024-12-09 05:38:03.642024] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:43:12.072 [2024-12-09 05:38:03.642036] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:43:12.072 [2024-12-09 05:38:03.642050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:43:12.072 [2024-12-09 05:38:03.642062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:43:12.072 [2024-12-09 05:38:03.642075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:43:12.072 [2024-12-09 05:38:03.642086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:43:12.072 [2024-12-09 05:38:03.642101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
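In the statistics block above the device reports total writes: 960 against user writes: 0, and hence WAF: inf: the write amplification factor is the ratio of total media writes to user writes, which is undefined (reported infinite) when the host has written nothing and all 960 writes are the FTL's own metadata traffic. The arithmetic, for the record:

# WAF as printed by ftl_debug.c: total (media) writes over user writes,
# reported as "inf" when there have been no user writes, as here.
total_writes = 960           # "total writes" from the dump above
user_writes = 0              # "user writes"
waf = total_writes / user_writes if user_writes else float("inf")
print(waf)                   # inf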
01:43:12.072 [2024-12-09 05:38:03.642113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:43:12.072 [2024-12-09 05:38:03.642128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 01:43:12.072 [2024-12-09 05:38:03.642143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.072 [2024-12-09 05:38:03.659386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.072 [2024-12-09 05:38:03.659573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:43:12.072 [2024-12-09 05:38:03.659625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 01:43:12.072 [2024-12-09 05:38:03.659639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.072 [2024-12-09 05:38:03.660281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:12.072 [2024-12-09 05:38:03.660346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:43:12.072 [2024-12-09 05:38:03.660367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 01:43:12.072 [2024-12-09 05:38:03.660378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.329 [2024-12-09 05:38:03.720089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.720155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:12.330 [2024-12-09 05:38:03.720178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.720190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.720340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.720358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:12.330 [2024-12-09 05:38:03.720386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.720397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.720467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.720484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:12.330 [2024-12-09 05:38:03.720502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.720513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.720541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.720562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:12.330 [2024-12-09 05:38:03.720589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.720608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.817925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.818004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:12.330 [2024-12-09 05:38:03.818037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.818048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 
05:38:03.894918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:12.330 [2024-12-09 05:38:03.895154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.895268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:12.330 [2024-12-09 05:38:03.895303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.895351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:12.330 [2024-12-09 05:38:03.895377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.895528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:12.330 [2024-12-09 05:38:03.895560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.895623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:43:12.330 [2024-12-09 05:38:03.895670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.895801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:12.330 [2024-12-09 05:38:03.895843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.895911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:12.330 [2024-12-09 05:38:03.895927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:12.330 [2024-12-09 05:38:03.895941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:12.330 [2024-12-09 05:38:03.895951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:12.330 [2024-12-09 05:38:03.896151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.778 ms, result 0 01:43:13.285 05:38:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:43:13.544 [2024-12-09 05:38:04.974529] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:43:13.544 [2024-12-09 05:38:04.974741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78959 ] 01:43:13.802 [2024-12-09 05:38:05.164013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:43:13.802 [2024-12-09 05:38:05.313174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:43:14.060 [2024-12-09 05:38:05.651891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:14.060 [2024-12-09 05:38:05.651973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:14.319 [2024-12-09 05:38:05.817454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.319 [2024-12-09 05:38:05.817506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:43:14.319 [2024-12-09 05:38:05.817525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:43:14.319 [2024-12-09 05:38:05.817536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.319 [2024-12-09 05:38:05.820728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.319 [2024-12-09 05:38:05.820766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:14.319 [2024-12-09 05:38:05.820780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 01:43:14.319 [2024-12-09 05:38:05.820790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.319 [2024-12-09 05:38:05.820921] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:43:14.319 [2024-12-09 05:38:05.821811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:43:14.319 [2024-12-09 05:38:05.821912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.319 [2024-12-09 05:38:05.821927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:14.319 [2024-12-09 05:38:05.821938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 01:43:14.319 [2024-12-09 05:38:05.821948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.319 [2024-12-09 05:38:05.824001] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:43:14.319 [2024-12-09 05:38:05.838264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.319 [2024-12-09 05:38:05.838302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:43:14.319 [2024-12-09 05:38:05.838317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.264 ms 01:43:14.319 [2024-12-09 05:38:05.838328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.319 [2024-12-09 05:38:05.838455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.319 [2024-12-09 05:38:05.838483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:43:14.319 [2024-12-09 05:38:05.838495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 01:43:14.319 [2024-12-09 
05:38:05.838505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.319 [2024-12-09 05:38:05.847232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.319 [2024-12-09 05:38:05.847431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:14.320 [2024-12-09 05:38:05.847457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.677 ms 01:43:14.320 [2024-12-09 05:38:05.847469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.847590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.320 [2024-12-09 05:38:05.847610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:14.320 [2024-12-09 05:38:05.847623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 01:43:14.320 [2024-12-09 05:38:05.847633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.847721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.320 [2024-12-09 05:38:05.847739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:43:14.320 [2024-12-09 05:38:05.847751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:43:14.320 [2024-12-09 05:38:05.847777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.847809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 01:43:14.320 [2024-12-09 05:38:05.852430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.320 [2024-12-09 05:38:05.852479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:14.320 [2024-12-09 05:38:05.852493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 01:43:14.320 [2024-12-09 05:38:05.852503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.852576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.320 [2024-12-09 05:38:05.852594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:43:14.320 [2024-12-09 05:38:05.852606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:43:14.320 [2024-12-09 05:38:05.852615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.852648] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:43:14.320 [2024-12-09 05:38:05.852712] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:43:14.320 [2024-12-09 05:38:05.852754] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:43:14.320 [2024-12-09 05:38:05.852773] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:43:14.320 [2024-12-09 05:38:05.852869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:43:14.320 [2024-12-09 05:38:05.852884] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:43:14.320 [2024-12-09 05:38:05.852898] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
01:43:14.320 [2024-12-09 05:38:05.852915] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:43:14.320 [2024-12-09 05:38:05.852927] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:43:14.320 [2024-12-09 05:38:05.852937] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 01:43:14.320 [2024-12-09 05:38:05.852947] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:43:14.320 [2024-12-09 05:38:05.852956] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:43:14.320 [2024-12-09 05:38:05.852965] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:43:14.320 [2024-12-09 05:38:05.852977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.320 [2024-12-09 05:38:05.853004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:43:14.320 [2024-12-09 05:38:05.853031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 01:43:14.320 [2024-12-09 05:38:05.853042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.853143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.320 [2024-12-09 05:38:05.853163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:43:14.320 [2024-12-09 05:38:05.853175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 01:43:14.320 [2024-12-09 05:38:05.853185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.320 [2024-12-09 05:38:05.853299] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:43:14.320 [2024-12-09 05:38:05.853321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:43:14.320 [2024-12-09 05:38:05.853334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:43:14.320 [2024-12-09 05:38:05.853366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:43:14.320 [2024-12-09 05:38:05.853395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:14.320 [2024-12-09 05:38:05.853415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:43:14.320 [2024-12-09 05:38:05.853438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 01:43:14.320 [2024-12-09 05:38:05.853447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:14.320 [2024-12-09 05:38:05.853462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:43:14.320 [2024-12-09 05:38:05.853473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 01:43:14.320 [2024-12-09 05:38:05.853483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 01:43:14.320 [2024-12-09 05:38:05.853502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:43:14.320 [2024-12-09 05:38:05.853545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:43:14.320 [2024-12-09 05:38:05.853586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:43:14.320 [2024-12-09 05:38:05.853612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:43:14.320 [2024-12-09 05:38:05.853640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:14.320 [2024-12-09 05:38:05.853657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:43:14.320 [2024-12-09 05:38:05.853666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:14.320 [2024-12-09 05:38:05.853684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:43:14.320 [2024-12-09 05:38:05.853693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 01:43:14.320 [2024-12-09 05:38:05.853702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:14.320 [2024-12-09 05:38:05.853712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:43:14.320 [2024-12-09 05:38:05.853721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 01:43:14.320 [2024-12-09 05:38:05.853731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.853986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:43:14.320 [2024-12-09 05:38:05.854028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 01:43:14.320 [2024-12-09 05:38:05.854152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.854197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:43:14.320 [2024-12-09 05:38:05.854232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:43:14.320 [2024-12-09 05:38:05.854275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:14.320 [2024-12-09 05:38:05.854388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:14.320 [2024-12-09 05:38:05.854405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:43:14.320 [2024-12-09 05:38:05.854416] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:43:14.320 [2024-12-09 05:38:05.854468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:43:14.320 [2024-12-09 05:38:05.854478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:43:14.320 [2024-12-09 05:38:05.854488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:43:14.320 [2024-12-09 05:38:05.854498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:43:14.320 [2024-12-09 05:38:05.854511] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:43:14.320 [2024-12-09 05:38:05.854525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:14.320 [2024-12-09 05:38:05.854537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 01:43:14.320 [2024-12-09 05:38:05.854547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 01:43:14.320 [2024-12-09 05:38:05.854558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 01:43:14.320 [2024-12-09 05:38:05.854568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 01:43:14.320 [2024-12-09 05:38:05.854578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 01:43:14.320 [2024-12-09 05:38:05.854589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 01:43:14.320 [2024-12-09 05:38:05.854599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 01:43:14.321 [2024-12-09 05:38:05.854609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 01:43:14.321 [2024-12-09 05:38:05.854620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 01:43:14.321 [2024-12-09 05:38:05.854630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 01:43:14.321 [2024-12-09 05:38:05.854641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 01:43:14.321 [2024-12-09 05:38:05.854651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 01:43:14.321 [2024-12-09 05:38:05.854661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 01:43:14.321 [2024-12-09 05:38:05.854672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 01:43:14.321 [2024-12-09 05:38:05.854699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:43:14.321 [2024-12-09 05:38:05.854712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:14.321 [2024-12-09 05:38:05.854724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:43:14.321 [2024-12-09 05:38:05.854734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:43:14.321 [2024-12-09 05:38:05.854745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:43:14.321 [2024-12-09 05:38:05.854756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:43:14.321 [2024-12-09 05:38:05.854799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.321 [2024-12-09 05:38:05.854829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:43:14.321 [2024-12-09 05:38:05.854840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 01:43:14.321 [2024-12-09 05:38:05.854852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.321 [2024-12-09 05:38:05.891601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.321 [2024-12-09 05:38:05.891649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:14.321 [2024-12-09 05:38:05.891699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.670 ms 01:43:14.321 [2024-12-09 05:38:05.891718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.321 [2024-12-09 05:38:05.891934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.321 [2024-12-09 05:38:05.891953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:43:14.321 [2024-12-09 05:38:05.891982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 01:43:14.321 [2024-12-09 05:38:05.891992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:05.949736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:05.949786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:14.580 [2024-12-09 05:38:05.949808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.714 ms 01:43:14.580 [2024-12-09 05:38:05.949819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:05.949969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:05.949988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:14.580 [2024-12-09 05:38:05.950001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:43:14.580 [2024-12-09 05:38:05.950012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:05.950614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:05.950633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:14.580 [2024-12-09 05:38:05.950653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 01:43:14.580 [2024-12-09 05:38:05.950665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:05.950928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:05.950960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:14.580 [2024-12-09 05:38:05.950976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 01:43:14.580 [2024-12-09 05:38:05.950986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:05.969732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:05.969772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:14.580 [2024-12-09 05:38:05.969787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.715 ms 01:43:14.580 [2024-12-09 05:38:05.969798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:05.984840] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:43:14.580 [2024-12-09 05:38:05.984880] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:43:14.580 [2024-12-09 05:38:05.984897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:05.984909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:43:14.580 [2024-12-09 05:38:05.984921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.966 ms 01:43:14.580 [2024-12-09 05:38:05.984931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.011181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.011238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:43:14.580 [2024-12-09 05:38:06.011255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.161 ms 01:43:14.580 [2024-12-09 05:38:06.011266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.025159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.025341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:43:14.580 [2024-12-09 05:38:06.025374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.805 ms 01:43:14.580 [2024-12-09 05:38:06.025385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.038877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.038915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:43:14.580 [2024-12-09 05:38:06.038939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.402 ms 01:43:14.580 [2024-12-09 05:38:06.038950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.039774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.039811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:43:14.580 [2024-12-09 05:38:06.039825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 01:43:14.580 [2024-12-09 05:38:06.039837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.117315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 
05:38:06.117409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:43:14.580 [2024-12-09 05:38:06.117432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.444 ms 01:43:14.580 [2024-12-09 05:38:06.117444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.128862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 01:43:14.580 [2024-12-09 05:38:06.148248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.148313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:43:14.580 [2024-12-09 05:38:06.148333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.656 ms 01:43:14.580 [2024-12-09 05:38:06.148352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.148494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.148514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:43:14.580 [2024-12-09 05:38:06.148527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:43:14.580 [2024-12-09 05:38:06.148538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.148609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.148624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:43:14.580 [2024-12-09 05:38:06.148636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:43:14.580 [2024-12-09 05:38:06.148652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.148740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.148759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:43:14.580 [2024-12-09 05:38:06.148771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 01:43:14.580 [2024-12-09 05:38:06.148782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.148828] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:43:14.580 [2024-12-09 05:38:06.148844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.148855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:43:14.580 [2024-12-09 05:38:06.148866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 01:43:14.580 [2024-12-09 05:38:06.148876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.177237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.177280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:43:14.580 [2024-12-09 05:38:06.177298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.334 ms 01:43:14.580 [2024-12-09 05:38:06.177310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.177473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:14.580 [2024-12-09 05:38:06.177494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:43:14.580 [2024-12-09 
05:38:06.177507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:43:14.580 [2024-12-09 05:38:06.177518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:14.580 [2024-12-09 05:38:06.179052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:14.580 [2024-12-09 05:38:06.182898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.132 ms, result 0 01:43:14.580 [2024-12-09 05:38:06.183667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:14.839 [2024-12-09 05:38:06.198128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:15.774  [2024-12-09T05:38:08.332Z] Copying: 25/256 [MB] (25 MBps) [2024-12-09T05:38:09.273Z] Copying: 47/256 [MB] (22 MBps) [2024-12-09T05:38:10.646Z] Copying: 69/256 [MB] (21 MBps) [2024-12-09T05:38:11.348Z] Copying: 91/256 [MB] (22 MBps) [2024-12-09T05:38:12.280Z] Copying: 113/256 [MB] (21 MBps) [2024-12-09T05:38:13.653Z] Copying: 134/256 [MB] (21 MBps) [2024-12-09T05:38:14.589Z] Copying: 155/256 [MB] (21 MBps) [2024-12-09T05:38:15.527Z] Copying: 177/256 [MB] (21 MBps) [2024-12-09T05:38:16.463Z] Copying: 200/256 [MB] (22 MBps) [2024-12-09T05:38:17.399Z] Copying: 221/256 [MB] (21 MBps) [2024-12-09T05:38:17.967Z] Copying: 244/256 [MB] (22 MBps) [2024-12-09T05:38:18.226Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-09 05:38:18.018301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:26.609 [2024-12-09 05:38:18.030887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.030938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:43:26.609 [2024-12-09 05:38:18.030990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:43:26.609 [2024-12-09 05:38:18.031018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.031058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 01:43:26.609 [2024-12-09 05:38:18.035377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.035411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:43:26.609 [2024-12-09 05:38:18.035426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.298 ms 01:43:26.609 [2024-12-09 05:38:18.035437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.035762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.035785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:43:26.609 [2024-12-09 05:38:18.035799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 01:43:26.609 [2024-12-09 05:38:18.035811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.039223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.039254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:43:26.609 [2024-12-09 05:38:18.039267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 01:43:26.609 [2024-12-09 
05:38:18.039279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.046629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.046679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:43:26.609 [2024-12-09 05:38:18.046704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.324 ms 01:43:26.609 [2024-12-09 05:38:18.046716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.075450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.075513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:43:26.609 [2024-12-09 05:38:18.075530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.638 ms 01:43:26.609 [2024-12-09 05:38:18.075542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.094818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.095091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:43:26.609 [2024-12-09 05:38:18.095130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.214 ms 01:43:26.609 [2024-12-09 05:38:18.095144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.095393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.095423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:43:26.609 [2024-12-09 05:38:18.095483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 01:43:26.609 [2024-12-09 05:38:18.095506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.126287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.126328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:43:26.609 [2024-12-09 05:38:18.126344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.755 ms 01:43:26.609 [2024-12-09 05:38:18.126355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.152675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.152746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:43:26.609 [2024-12-09 05:38:18.152762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.203 ms 01:43:26.609 [2024-12-09 05:38:18.152774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.177977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.178314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:43:26.609 [2024-12-09 05:38:18.178342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.127 ms 01:43:26.609 [2024-12-09 05:38:18.178355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.204952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.609 [2024-12-09 05:38:18.205165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:43:26.609 [2024-12-09 05:38:18.205303] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.435 ms 01:43:26.609 [2024-12-09 05:38:18.205431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.609 [2024-12-09 05:38:18.205547] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:43:26.609 [2024-12-09 05:38:18.205730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.205801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.206982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.207052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.207224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.207373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.207553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.207820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.208134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.208347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.208530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.208709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.208838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.208944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 01:43:26.610 [2024-12-09 05:38:18.209545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.209990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:43:26.610 [2024-12-09 05:38:18.210697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210868] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:43:26.611 [2024-12-09 05:38:18.210912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:43:26.611 [2024-12-09 05:38:18.210939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5177abd3-cafa-411b-b43c-d71befe750fc 01:43:26.611 [2024-12-09 05:38:18.210951] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:43:26.611 [2024-12-09 05:38:18.210961] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:43:26.611 [2024-12-09 05:38:18.210972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:43:26.611 [2024-12-09 05:38:18.210982] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:43:26.611 [2024-12-09 05:38:18.210992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:43:26.611 [2024-12-09 05:38:18.211003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:43:26.611 [2024-12-09 05:38:18.211020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:43:26.611 [2024-12-09 05:38:18.211029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:43:26.611 [2024-12-09 05:38:18.211038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:43:26.611 [2024-12-09 05:38:18.211049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.611 [2024-12-09 05:38:18.211059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:43:26.611 [2024-12-09 05:38:18.211071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.504 ms 01:43:26.611 [2024-12-09 05:38:18.211082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.227455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.869 [2024-12-09 05:38:18.227615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:43:26.869 [2024-12-09 05:38:18.227640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.344 ms 01:43:26.869 [2024-12-09 05:38:18.227652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.228225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:26.869 [2024-12-09 05:38:18.228251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:43:26.869 [2024-12-09 05:38:18.228264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 01:43:26.869 [2024-12-09 05:38:18.228275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.272191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.272241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:26.869 [2024-12-09 05:38:18.272256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.272274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.272379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
01:43:26.869 [2024-12-09 05:38:18.272395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:26.869 [2024-12-09 05:38:18.272407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.272418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.272481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.272499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:26.869 [2024-12-09 05:38:18.272511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.272521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.272552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.272566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:26.869 [2024-12-09 05:38:18.272577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.272588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.377757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.378092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:26.869 [2024-12-09 05:38:18.378121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.378134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.459144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.459518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:26.869 [2024-12-09 05:38:18.459548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.459560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.459717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.459737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:26.869 [2024-12-09 05:38:18.459752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.459763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.459802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.459826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:26.869 [2024-12-09 05:38:18.459838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.459850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.459993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.460012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:26.869 [2024-12-09 05:38:18.460040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.460062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 
05:38:18.460109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.460132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:43:26.869 [2024-12-09 05:38:18.460149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.460159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.460209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.460223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:26.869 [2024-12-09 05:38:18.460235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.460244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.460301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:26.869 [2024-12-09 05:38:18.460321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:26.869 [2024-12-09 05:38:18.460332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:26.869 [2024-12-09 05:38:18.460343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:26.869 [2024-12-09 05:38:18.460530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 429.640 ms, result 0 01:43:28.241 01:43:28.241 01:43:28.241 05:38:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:43:28.500 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 01:43:28.500 05:38:20 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 01:43:28.500 05:38:20 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 01:43:28.500 05:38:20 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:43:28.500 05:38:20 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:43:28.500 05:38:20 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 01:43:28.759 05:38:20 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 01:43:28.759 05:38:20 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78898 01:43:28.759 Process with pid 78898 is not found 01:43:28.759 05:38:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78898 ']' 01:43:28.759 05:38:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78898 01:43:28.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78898) - No such process 01:43:28.759 05:38:20 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78898 is not found' 01:43:28.759 ************************************ 01:43:28.759 END TEST ftl_trim 01:43:28.759 ************************************ 01:43:28.759 01:43:28.759 real 1m16.948s 01:43:28.759 user 1m44.095s 01:43:28.759 sys 0m7.992s 01:43:28.759 05:38:20 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 01:43:28.759 05:38:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 01:43:28.759 05:38:20 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 01:43:28.759 05:38:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:43:28.759 05:38:20 ftl -- common/autotest_common.sh@1111 -- # 
xtrace_disable 01:43:28.759 05:38:20 ftl -- common/autotest_common.sh@10 -- # set +x 01:43:28.759 ************************************ 01:43:28.759 START TEST ftl_restore 01:43:28.759 ************************************ 01:43:28.759 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 01:43:28.759 * Looking for test storage... 01:43:28.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:43:28.759 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:43:28.759 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 01:43:28.759 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:43:29.017 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:43:29.017 05:38:20 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 01:43:29.017 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:43:29.017 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:43:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:43:29.017 --rc genhtml_branch_coverage=1 01:43:29.017 --rc genhtml_function_coverage=1 01:43:29.017 --rc genhtml_legend=1 01:43:29.017 --rc geninfo_all_blocks=1 01:43:29.017 --rc geninfo_unexecuted_blocks=1 01:43:29.017 01:43:29.017 ' 01:43:29.017 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:43:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:43:29.017 --rc genhtml_branch_coverage=1 01:43:29.017 --rc genhtml_function_coverage=1 01:43:29.017 --rc genhtml_legend=1 01:43:29.017 --rc geninfo_all_blocks=1 01:43:29.017 --rc geninfo_unexecuted_blocks=1 01:43:29.017 01:43:29.017 ' 01:43:29.017 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:43:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:43:29.017 --rc genhtml_branch_coverage=1 01:43:29.017 --rc genhtml_function_coverage=1 01:43:29.017 --rc genhtml_legend=1 01:43:29.017 --rc geninfo_all_blocks=1 01:43:29.017 --rc geninfo_unexecuted_blocks=1 01:43:29.017 01:43:29.017 ' 01:43:29.017 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:43:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:43:29.017 --rc genhtml_branch_coverage=1 01:43:29.017 --rc genhtml_function_coverage=1 01:43:29.017 --rc genhtml_legend=1 01:43:29.017 --rc geninfo_all_blocks=1 01:43:29.017 --rc geninfo_unexecuted_blocks=1 01:43:29.017 01:43:29.017 ' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
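The xtrace above steps through the dotted-version comparison helpers in scripts/common.sh (lt / cmp_versions), which autotest_common.sh uses here to decide whether the installed lcov predates version 2 and, from that, which LCOV_OPTS to export. A condensed re-sketch of the traced per-field compare; version_lt is a hypothetical name and the body approximates the traced logic rather than copying the script verbatim:

# Sketch: split two dotted versions on '.', '-' and ':' and compare them
# numerically field by field, mirroring the cmp_versions trace above.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields compare as 0
        (( a < b )) && return 0            # first lower field -> strictly less-than
        (( a > b )) && return 1
    done
    return 1                               # all fields equal -> not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # true here, matching the trace: 1 < 2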
01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qIX1GyXE3A 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 01:43:29.017 05:38:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 01:43:29.018 05:38:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 01:43:29.018 05:38:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 01:43:29.018 
05:38:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79182 01:43:29.018 05:38:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79182 01:43:29.018 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79182 ']' 01:43:29.018 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:43:29.018 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 01:43:29.018 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:43:29.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:43:29.018 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 01:43:29.018 05:38:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 01:43:29.018 05:38:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:43:29.018 [2024-12-09 05:38:20.600523] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:43:29.018 [2024-12-09 05:38:20.601882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79182 ] 01:43:29.275 [2024-12-09 05:38:20.805082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:43:29.532 [2024-12-09 05:38:20.981501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:43:30.465 05:38:21 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:43:30.465 05:38:21 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 01:43:30.465 05:38:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:43:30.465 05:38:21 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 01:43:30.465 05:38:21 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:43:30.465 05:38:21 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 01:43:30.465 05:38:21 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 01:43:30.465 05:38:21 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:43:30.722 05:38:22 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:43:30.722 05:38:22 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 01:43:30.722 05:38:22 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:43:30.722 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:43:30.722 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:43:30.722 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:43:30.722 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:43:30.722 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:43:30.981 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:43:30.981 { 01:43:30.981 "name": "nvme0n1", 01:43:30.981 "aliases": [ 01:43:30.981 "0416cd8b-b6c7-42ba-95bb-14e1b5048600" 01:43:30.981 ], 01:43:30.981 "product_name": "NVMe disk", 01:43:30.981 "block_size": 4096, 01:43:30.981 "num_blocks": 1310720, 01:43:30.981 "uuid": 
"0416cd8b-b6c7-42ba-95bb-14e1b5048600", 01:43:30.981 "numa_id": -1, 01:43:30.981 "assigned_rate_limits": { 01:43:30.981 "rw_ios_per_sec": 0, 01:43:30.981 "rw_mbytes_per_sec": 0, 01:43:30.981 "r_mbytes_per_sec": 0, 01:43:30.981 "w_mbytes_per_sec": 0 01:43:30.981 }, 01:43:30.981 "claimed": true, 01:43:30.981 "claim_type": "read_many_write_one", 01:43:30.981 "zoned": false, 01:43:30.981 "supported_io_types": { 01:43:30.981 "read": true, 01:43:30.981 "write": true, 01:43:30.981 "unmap": true, 01:43:30.981 "flush": true, 01:43:30.981 "reset": true, 01:43:30.981 "nvme_admin": true, 01:43:30.981 "nvme_io": true, 01:43:30.981 "nvme_io_md": false, 01:43:30.981 "write_zeroes": true, 01:43:30.981 "zcopy": false, 01:43:30.981 "get_zone_info": false, 01:43:30.981 "zone_management": false, 01:43:30.981 "zone_append": false, 01:43:30.981 "compare": true, 01:43:30.981 "compare_and_write": false, 01:43:30.981 "abort": true, 01:43:30.981 "seek_hole": false, 01:43:30.981 "seek_data": false, 01:43:30.981 "copy": true, 01:43:30.981 "nvme_iov_md": false 01:43:30.981 }, 01:43:30.981 "driver_specific": { 01:43:30.981 "nvme": [ 01:43:30.981 { 01:43:30.981 "pci_address": "0000:00:11.0", 01:43:30.981 "trid": { 01:43:30.981 "trtype": "PCIe", 01:43:30.981 "traddr": "0000:00:11.0" 01:43:30.981 }, 01:43:30.981 "ctrlr_data": { 01:43:30.981 "cntlid": 0, 01:43:30.981 "vendor_id": "0x1b36", 01:43:30.981 "model_number": "QEMU NVMe Ctrl", 01:43:30.981 "serial_number": "12341", 01:43:30.981 "firmware_revision": "8.0.0", 01:43:30.981 "subnqn": "nqn.2019-08.org.qemu:12341", 01:43:30.981 "oacs": { 01:43:30.981 "security": 0, 01:43:30.981 "format": 1, 01:43:30.981 "firmware": 0, 01:43:30.981 "ns_manage": 1 01:43:30.981 }, 01:43:30.981 "multi_ctrlr": false, 01:43:30.981 "ana_reporting": false 01:43:30.981 }, 01:43:30.981 "vs": { 01:43:30.981 "nvme_version": "1.4" 01:43:30.981 }, 01:43:30.981 "ns_data": { 01:43:30.981 "id": 1, 01:43:30.981 "can_share": false 01:43:30.981 } 01:43:30.981 } 01:43:30.981 ], 01:43:30.981 "mp_policy": "active_passive" 01:43:30.981 } 01:43:30.981 } 01:43:30.981 ]' 01:43:30.981 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:43:30.981 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:43:30.981 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:43:31.240 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 01:43:31.240 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:43:31.240 05:38:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 01:43:31.240 05:38:22 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 01:43:31.240 05:38:22 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:43:31.240 05:38:22 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 01:43:31.240 05:38:22 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:43:31.240 05:38:22 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:43:31.498 05:38:22 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=c9760a6f-d727-43f2-aa49-541fb916ec32 01:43:31.498 05:38:22 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 01:43:31.498 05:38:22 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9760a6f-d727-43f2-aa49-541fb916ec32 01:43:31.756 05:38:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 01:43:32.019 05:38:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=366a8497-de20-4d4a-afe1-1d22a8c3c95d 01:43:32.019 05:38:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 366a8497-de20-4d4a-afe1-1d22a8c3c95d 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=4418b751-6eb8-4119-b733-aafa2215302c 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4418b751-6eb8-4119-b733-aafa2215302c 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=4418b751-6eb8-4119-b733-aafa2215302c 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 01:43:32.287 05:38:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 4418b751-6eb8-4119-b733-aafa2215302c 01:43:32.287 05:38:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4418b751-6eb8-4119-b733-aafa2215302c 01:43:32.287 05:38:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:43:32.287 05:38:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:43:32.287 05:38:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:43:32.287 05:38:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4418b751-6eb8-4119-b733-aafa2215302c 01:43:32.566 05:38:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:43:32.566 { 01:43:32.566 "name": "4418b751-6eb8-4119-b733-aafa2215302c", 01:43:32.566 "aliases": [ 01:43:32.566 "lvs/nvme0n1p0" 01:43:32.566 ], 01:43:32.566 "product_name": "Logical Volume", 01:43:32.566 "block_size": 4096, 01:43:32.566 "num_blocks": 26476544, 01:43:32.566 "uuid": "4418b751-6eb8-4119-b733-aafa2215302c", 01:43:32.566 "assigned_rate_limits": { 01:43:32.566 "rw_ios_per_sec": 0, 01:43:32.566 "rw_mbytes_per_sec": 0, 01:43:32.566 "r_mbytes_per_sec": 0, 01:43:32.566 "w_mbytes_per_sec": 0 01:43:32.566 }, 01:43:32.566 "claimed": false, 01:43:32.566 "zoned": false, 01:43:32.566 "supported_io_types": { 01:43:32.566 "read": true, 01:43:32.566 "write": true, 01:43:32.566 "unmap": true, 01:43:32.566 "flush": false, 01:43:32.566 "reset": true, 01:43:32.566 "nvme_admin": false, 01:43:32.566 "nvme_io": false, 01:43:32.566 "nvme_io_md": false, 01:43:32.566 "write_zeroes": true, 01:43:32.566 "zcopy": false, 01:43:32.566 "get_zone_info": false, 01:43:32.566 "zone_management": false, 01:43:32.566 "zone_append": false, 01:43:32.566 "compare": false, 01:43:32.566 "compare_and_write": false, 01:43:32.566 "abort": false, 01:43:32.566 "seek_hole": true, 01:43:32.566 "seek_data": true, 01:43:32.566 "copy": false, 01:43:32.566 "nvme_iov_md": false 01:43:32.566 }, 01:43:32.566 "driver_specific": { 01:43:32.566 "lvol": { 01:43:32.566 "lvol_store_uuid": "366a8497-de20-4d4a-afe1-1d22a8c3c95d", 01:43:32.566 "base_bdev": "nvme0n1", 01:43:32.566 "thin_provision": true, 01:43:32.566 "num_allocated_clusters": 0, 01:43:32.566 "snapshot": false, 01:43:32.566 "clone": false, 01:43:32.566 "esnap_clone": false 01:43:32.566 } 01:43:32.566 } 01:43:32.566 } 01:43:32.566 ]' 01:43:32.566 05:38:23 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:43:32.566 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:43:32.566 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:43:32.566 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 01:43:32.566 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:43:32.566 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 01:43:32.566 05:38:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 01:43:32.566 05:38:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 01:43:32.566 05:38:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:43:33.133 05:38:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:43:33.133 05:38:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 01:43:33.133 05:38:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 4418b751-6eb8-4119-b733-aafa2215302c 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4418b751-6eb8-4119-b733-aafa2215302c 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4418b751-6eb8-4119-b733-aafa2215302c 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:43:33.133 { 01:43:33.133 "name": "4418b751-6eb8-4119-b733-aafa2215302c", 01:43:33.133 "aliases": [ 01:43:33.133 "lvs/nvme0n1p0" 01:43:33.133 ], 01:43:33.133 "product_name": "Logical Volume", 01:43:33.133 "block_size": 4096, 01:43:33.133 "num_blocks": 26476544, 01:43:33.133 "uuid": "4418b751-6eb8-4119-b733-aafa2215302c", 01:43:33.133 "assigned_rate_limits": { 01:43:33.133 "rw_ios_per_sec": 0, 01:43:33.133 "rw_mbytes_per_sec": 0, 01:43:33.133 "r_mbytes_per_sec": 0, 01:43:33.133 "w_mbytes_per_sec": 0 01:43:33.133 }, 01:43:33.133 "claimed": false, 01:43:33.133 "zoned": false, 01:43:33.133 "supported_io_types": { 01:43:33.133 "read": true, 01:43:33.133 "write": true, 01:43:33.133 "unmap": true, 01:43:33.133 "flush": false, 01:43:33.133 "reset": true, 01:43:33.133 "nvme_admin": false, 01:43:33.133 "nvme_io": false, 01:43:33.133 "nvme_io_md": false, 01:43:33.133 "write_zeroes": true, 01:43:33.133 "zcopy": false, 01:43:33.133 "get_zone_info": false, 01:43:33.133 "zone_management": false, 01:43:33.133 "zone_append": false, 01:43:33.133 "compare": false, 01:43:33.133 "compare_and_write": false, 01:43:33.133 "abort": false, 01:43:33.133 "seek_hole": true, 01:43:33.133 "seek_data": true, 01:43:33.133 "copy": false, 01:43:33.133 "nvme_iov_md": false 01:43:33.133 }, 01:43:33.133 "driver_specific": { 01:43:33.133 "lvol": { 01:43:33.133 "lvol_store_uuid": "366a8497-de20-4d4a-afe1-1d22a8c3c95d", 01:43:33.133 "base_bdev": "nvme0n1", 01:43:33.133 "thin_provision": true, 01:43:33.133 "num_allocated_clusters": 0, 01:43:33.133 "snapshot": false, 01:43:33.133 "clone": false, 01:43:33.133 "esnap_clone": false 01:43:33.133 } 01:43:33.133 } 01:43:33.133 } 01:43:33.133 ]' 01:43:33.133 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
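The jq calls traced above and just below are the extraction step of get_bdev_size: the helper fetches the bdev_get_bdevs JSON once, pulls block_size and num_blocks out of it, and converts the product to MiB (for nvme0n1 above: 4096 * 1310720 / 1048576 = 5120). A hedged condensation of that flow; the rpc.py path and bdev name are taken from the log, while the function body is a sketch rather than the exact autotest_common.sh text:

# Sketch: derive a bdev's size in MiB the way the traced helper does.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    echo $(( bs * nb / 1024 / 1024 ))    # 4096 * 1310720 -> 5120 MiB
}

get_bdev_size nvme0n1    # prints 5120 for the QEMU NVMe namespace dumped above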
01:43:33.392 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:43:33.392 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:43:33.392 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 01:43:33.392 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:43:33.392 05:38:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 01:43:33.392 05:38:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 01:43:33.392 05:38:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:43:33.663 05:38:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 01:43:33.663 05:38:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 4418b751-6eb8-4119-b733-aafa2215302c 01:43:33.663 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4418b751-6eb8-4119-b733-aafa2215302c 01:43:33.663 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 01:43:33.663 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 01:43:33.663 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 01:43:33.663 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4418b751-6eb8-4119-b733-aafa2215302c 01:43:33.924 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:43:33.924 { 01:43:33.924 "name": "4418b751-6eb8-4119-b733-aafa2215302c", 01:43:33.924 "aliases": [ 01:43:33.924 "lvs/nvme0n1p0" 01:43:33.924 ], 01:43:33.924 "product_name": "Logical Volume", 01:43:33.924 "block_size": 4096, 01:43:33.924 "num_blocks": 26476544, 01:43:33.924 "uuid": "4418b751-6eb8-4119-b733-aafa2215302c", 01:43:33.924 "assigned_rate_limits": { 01:43:33.924 "rw_ios_per_sec": 0, 01:43:33.924 "rw_mbytes_per_sec": 0, 01:43:33.924 "r_mbytes_per_sec": 0, 01:43:33.924 "w_mbytes_per_sec": 0 01:43:33.924 }, 01:43:33.924 "claimed": false, 01:43:33.924 "zoned": false, 01:43:33.924 "supported_io_types": { 01:43:33.924 "read": true, 01:43:33.924 "write": true, 01:43:33.924 "unmap": true, 01:43:33.924 "flush": false, 01:43:33.924 "reset": true, 01:43:33.924 "nvme_admin": false, 01:43:33.924 "nvme_io": false, 01:43:33.924 "nvme_io_md": false, 01:43:33.924 "write_zeroes": true, 01:43:33.924 "zcopy": false, 01:43:33.924 "get_zone_info": false, 01:43:33.924 "zone_management": false, 01:43:33.924 "zone_append": false, 01:43:33.924 "compare": false, 01:43:33.924 "compare_and_write": false, 01:43:33.924 "abort": false, 01:43:33.924 "seek_hole": true, 01:43:33.924 "seek_data": true, 01:43:33.924 "copy": false, 01:43:33.924 "nvme_iov_md": false 01:43:33.924 }, 01:43:33.924 "driver_specific": { 01:43:33.924 "lvol": { 01:43:33.924 "lvol_store_uuid": "366a8497-de20-4d4a-afe1-1d22a8c3c95d", 01:43:33.924 "base_bdev": "nvme0n1", 01:43:33.924 "thin_provision": true, 01:43:33.924 "num_allocated_clusters": 0, 01:43:33.924 "snapshot": false, 01:43:33.924 "clone": false, 01:43:33.924 "esnap_clone": false 01:43:33.924 } 01:43:33.924 } 01:43:33.924 } 01:43:33.924 ]' 01:43:33.924 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:43:33.924 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 01:43:33.924 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:43:33.924 05:38:25 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 01:43:33.924 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:43:33.924 05:38:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4418b751-6eb8-4119-b733-aafa2215302c --l2p_dram_limit 10' 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 01:43:33.924 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 01:43:33.924 05:38:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4418b751-6eb8-4119-b733-aafa2215302c --l2p_dram_limit 10 -c nvc0n1p0 01:43:34.183 [2024-12-09 05:38:25.606520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.606590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:43:34.183 [2024-12-09 05:38:25.606626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:43:34.183 [2024-12-09 05:38:25.606640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.606827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.606850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:34.183 [2024-12-09 05:38:25.606867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 01:43:34.183 [2024-12-09 05:38:25.606879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.606913] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:43:34.183 [2024-12-09 05:38:25.607984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:43:34.183 [2024-12-09 05:38:25.608031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.608045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:34.183 [2024-12-09 05:38:25.608061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 01:43:34.183 [2024-12-09 05:38:25.608078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.608259] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2fe6c684-ea00-40ee-a25d-c4c960459442 01:43:34.183 [2024-12-09 05:38:25.610757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.610815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:43:34.183 [2024-12-09 05:38:25.610832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 01:43:34.183 [2024-12-09 05:38:25.610848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.624807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 
05:38:25.625150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:34.183 [2024-12-09 05:38:25.625182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.855 ms 01:43:34.183 [2024-12-09 05:38:25.625199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.625354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.625379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:34.183 [2024-12-09 05:38:25.625394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 01:43:34.183 [2024-12-09 05:38:25.625413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.625534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.625561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:43:34.183 [2024-12-09 05:38:25.625574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 01:43:34.183 [2024-12-09 05:38:25.625589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.625640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:34.183 [2024-12-09 05:38:25.631498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.631537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:34.183 [2024-12-09 05:38:25.631564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.866 ms 01:43:34.183 [2024-12-09 05:38:25.631576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.631620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.631636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:43:34.183 [2024-12-09 05:38:25.631650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:43:34.183 [2024-12-09 05:38:25.631680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.631746] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:43:34.183 [2024-12-09 05:38:25.631927] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:43:34.183 [2024-12-09 05:38:25.631953] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:43:34.183 [2024-12-09 05:38:25.631969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:43:34.183 [2024-12-09 05:38:25.631988] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:43:34.183 [2024-12-09 05:38:25.632008] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:43:34.183 [2024-12-09 05:38:25.632027] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:43:34.183 [2024-12-09 05:38:25.632039] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:43:34.183 [2024-12-09 05:38:25.632053] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:43:34.183 [2024-12-09 05:38:25.632064] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:43:34.183 [2024-12-09 05:38:25.632079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.632101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:43:34.183 [2024-12-09 05:38:25.632134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 01:43:34.183 [2024-12-09 05:38:25.632145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.632248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.183 [2024-12-09 05:38:25.632268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:43:34.183 [2024-12-09 05:38:25.632283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:43:34.183 [2024-12-09 05:38:25.632296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.183 [2024-12-09 05:38:25.632402] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:43:34.183 [2024-12-09 05:38:25.632419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:43:34.183 [2024-12-09 05:38:25.632434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:34.183 [2024-12-09 05:38:25.632445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:34.183 [2024-12-09 05:38:25.632458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:43:34.183 [2024-12-09 05:38:25.632468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:43:34.183 [2024-12-09 05:38:25.632488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:43:34.183 [2024-12-09 05:38:25.632498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:43:34.183 [2024-12-09 05:38:25.632510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:43:34.183 [2024-12-09 05:38:25.632520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:34.183 [2024-12-09 05:38:25.632532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:43:34.183 [2024-12-09 05:38:25.632544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:43:34.183 [2024-12-09 05:38:25.632557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:34.183 [2024-12-09 05:38:25.632567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:43:34.183 [2024-12-09 05:38:25.632579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:43:34.183 [2024-12-09 05:38:25.632589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:34.183 [2024-12-09 05:38:25.632603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:43:34.183 [2024-12-09 05:38:25.632614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:43:34.183 [2024-12-09 05:38:25.632628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:34.183 [2024-12-09 05:38:25.632637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:43:34.184 [2024-12-09 05:38:25.632650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:34.184 [2024-12-09 05:38:25.632672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:43:34.184 
[2024-12-09 05:38:25.632682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:34.184 [2024-12-09 05:38:25.632704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:43:34.184 [2024-12-09 05:38:25.632732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:34.184 [2024-12-09 05:38:25.632755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:43:34.184 [2024-12-09 05:38:25.632765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:34.184 [2024-12-09 05:38:25.632787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:43:34.184 [2024-12-09 05:38:25.632802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:34.184 [2024-12-09 05:38:25.632825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:43:34.184 [2024-12-09 05:38:25.632834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:43:34.184 [2024-12-09 05:38:25.632846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:34.184 [2024-12-09 05:38:25.632856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:43:34.184 [2024-12-09 05:38:25.632869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:43:34.184 [2024-12-09 05:38:25.632899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:43:34.184 [2024-12-09 05:38:25.632942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:43:34.184 [2024-12-09 05:38:25.632959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:34.184 [2024-12-09 05:38:25.632972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:43:34.184 [2024-12-09 05:38:25.632987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:43:34.184 [2024-12-09 05:38:25.632999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:34.184 [2024-12-09 05:38:25.633015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:34.184 [2024-12-09 05:38:25.633030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:43:34.184 [2024-12-09 05:38:25.633046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:43:34.184 [2024-12-09 05:38:25.633057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:43:34.184 [2024-12-09 05:38:25.633070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:43:34.184 [2024-12-09 05:38:25.633081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:43:34.184 [2024-12-09 05:38:25.633094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:43:34.184 [2024-12-09 05:38:25.633110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:43:34.184 [2024-12-09 
05:38:25.633127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:43:34.184 [2024-12-09 05:38:25.633153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:43:34.184 [2024-12-09 05:38:25.633165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:43:34.184 [2024-12-09 05:38:25.633178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:43:34.184 [2024-12-09 05:38:25.633189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:43:34.184 [2024-12-09 05:38:25.633201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:43:34.184 [2024-12-09 05:38:25.633229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:43:34.184 [2024-12-09 05:38:25.633242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:43:34.184 [2024-12-09 05:38:25.633252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:43:34.184 [2024-12-09 05:38:25.633268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:43:34.184 [2024-12-09 05:38:25.633328] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:43:34.184 [2024-12-09 05:38:25.633343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:43:34.184 [2024-12-09 05:38:25.633369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:43:34.184 [2024-12-09 05:38:25.633379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:43:34.184 [2024-12-09 05:38:25.633392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:43:34.184 [2024-12-09 05:38:25.633404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:34.184 [2024-12-09 05:38:25.633417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:43:34.184 [2024-12-09 05:38:25.633429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 01:43:34.184 [2024-12-09 05:38:25.633442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:34.184 [2024-12-09 05:38:25.633499] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 01:43:34.184 [2024-12-09 05:38:25.633522] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:43:36.725 [2024-12-09 05:38:28.290061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.725 [2024-12-09 05:38:28.290183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:43:36.725 [2024-12-09 05:38:28.290206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2656.576 ms 01:43:36.725 [2024-12-09 05:38:28.290221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.725 [2024-12-09 05:38:28.337421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.725 [2024-12-09 05:38:28.337578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:36.725 [2024-12-09 05:38:28.337602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.862 ms 01:43:36.725 [2024-12-09 05:38:28.337619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.725 [2024-12-09 05:38:28.337927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.725 [2024-12-09 05:38:28.337964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:43:36.725 [2024-12-09 05:38:28.337985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 01:43:36.725 [2024-12-09 05:38:28.338005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.386962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.387266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:36.984 [2024-12-09 05:38:28.387296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.844 ms 01:43:36.984 [2024-12-09 05:38:28.387313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.387391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.387411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:36.984 [2024-12-09 05:38:28.387423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:43:36.984 [2024-12-09 05:38:28.387452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.388445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.388507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:36.984 [2024-12-09 05:38:28.388522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 01:43:36.984 [2024-12-09 05:38:28.388536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 
[2024-12-09 05:38:28.388699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.388718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:36.984 [2024-12-09 05:38:28.388731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 01:43:36.984 [2024-12-09 05:38:28.388747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.412632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.413011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:36.984 [2024-12-09 05:38:28.413047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.828 ms 01:43:36.984 [2024-12-09 05:38:28.413070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.438958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:43:36.984 [2024-12-09 05:38:28.444208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.444250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:43:36.984 [2024-12-09 05:38:28.444280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.940 ms 01:43:36.984 [2024-12-09 05:38:28.444292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.520869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.520936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:43:36.984 [2024-12-09 05:38:28.520958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.480 ms 01:43:36.984 [2024-12-09 05:38:28.520971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.521185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.521204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:43:36.984 [2024-12-09 05:38:28.521223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 01:43:36.984 [2024-12-09 05:38:28.521235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.553767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.553825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:43:36.984 [2024-12-09 05:38:28.553846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.452 ms 01:43:36.984 [2024-12-09 05:38:28.553862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.584250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.584319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:43:36.984 [2024-12-09 05:38:28.584346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.300 ms 01:43:36.984 [2024-12-09 05:38:28.584357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:36.984 [2024-12-09 05:38:28.585243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:36.984 [2024-12-09 05:38:28.585291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:43:36.984 
[2024-12-09 05:38:28.585330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 01:43:36.984 [2024-12-09 05:38:28.585343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.673959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.242 [2024-12-09 05:38:28.674044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:43:37.242 [2024-12-09 05:38:28.674074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.539 ms 01:43:37.242 [2024-12-09 05:38:28.674096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.703144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.242 [2024-12-09 05:38:28.703185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:43:37.242 [2024-12-09 05:38:28.703206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.904 ms 01:43:37.242 [2024-12-09 05:38:28.703218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.730777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.242 [2024-12-09 05:38:28.730817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:43:37.242 [2024-12-09 05:38:28.730843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.491 ms 01:43:37.242 [2024-12-09 05:38:28.730854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.760394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.242 [2024-12-09 05:38:28.760433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:43:37.242 [2024-12-09 05:38:28.760453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.471 ms 01:43:37.242 [2024-12-09 05:38:28.760465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.760521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.242 [2024-12-09 05:38:28.760539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:43:37.242 [2024-12-09 05:38:28.760558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:43:37.242 [2024-12-09 05:38:28.760569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.760741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.242 [2024-12-09 05:38:28.760761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:43:37.242 [2024-12-09 05:38:28.760777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 01:43:37.242 [2024-12-09 05:38:28.760804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.242 [2024-12-09 05:38:28.762598] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3155.367 ms, result 0 01:43:37.242 { 01:43:37.242 "name": "ftl0", 01:43:37.242 "uuid": "2fe6c684-ea00-40ee-a25d-c4c960459442" 01:43:37.242 } 01:43:37.242 05:38:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 01:43:37.242 05:38:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:43:37.501 05:38:29 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 01:43:37.501 05:38:29 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:43:37.759 [2024-12-09 05:38:29.277388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.277498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:43:37.759 [2024-12-09 05:38:29.277530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:43:37.759 [2024-12-09 05:38:29.277545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.277581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:43:37.759 [2024-12-09 05:38:29.281371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.281568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:43:37.759 [2024-12-09 05:38:29.281601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 01:43:37.759 [2024-12-09 05:38:29.281615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.282024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.282045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:43:37.759 [2024-12-09 05:38:29.282088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 01:43:37.759 [2024-12-09 05:38:29.282100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.284989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.285022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:43:37.759 [2024-12-09 05:38:29.285043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 01:43:37.759 [2024-12-09 05:38:29.285055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.290960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.291181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:43:37.759 [2024-12-09 05:38:29.291211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.875 ms 01:43:37.759 [2024-12-09 05:38:29.291226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.324318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.324358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:43:37.759 [2024-12-09 05:38:29.324379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.019 ms 01:43:37.759 [2024-12-09 05:38:29.324391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.343144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.343184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:43:37.759 [2024-12-09 05:38:29.343205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.701 ms 01:43:37.759 [2024-12-09 05:38:29.343217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:37.759 [2024-12-09 05:38:29.343416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:37.759 [2024-12-09 05:38:29.343437] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:43:37.759 [2024-12-09 05:38:29.343453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 01:43:37.759 [2024-12-09 05:38:29.343467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.019 [2024-12-09 05:38:29.375725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.019 [2024-12-09 05:38:29.375950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:43:38.019 [2024-12-09 05:38:29.375987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.229 ms 01:43:38.019 [2024-12-09 05:38:29.376002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.019 [2024-12-09 05:38:29.403776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.019 [2024-12-09 05:38:29.403815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:43:38.019 [2024-12-09 05:38:29.403833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.715 ms 01:43:38.019 [2024-12-09 05:38:29.403844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.019 [2024-12-09 05:38:29.431161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.019 [2024-12-09 05:38:29.431200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:43:38.019 [2024-12-09 05:38:29.431226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.264 ms 01:43:38.019 [2024-12-09 05:38:29.431238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.019 [2024-12-09 05:38:29.458678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.019 [2024-12-09 05:38:29.458747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:43:38.019 [2024-12-09 05:38:29.458777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.321 ms 01:43:38.019 [2024-12-09 05:38:29.458789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.019 [2024-12-09 05:38:29.458857] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:43:38.019 [2024-12-09 05:38:29.458884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.458902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.458915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.458929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.458941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.458964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.458975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459048] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:43:38.019 [2024-12-09 05:38:29.459295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 
[2024-12-09 05:38:29.459386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 01:43:38.020 [2024-12-09 05:38:29.459759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.459997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:43:38.020 [2024-12-09 05:38:29.460382] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:43:38.020 [2024-12-09 05:38:29.460396] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2fe6c684-ea00-40ee-a25d-c4c960459442 01:43:38.020 [2024-12-09 05:38:29.460407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:43:38.020 [2024-12-09 05:38:29.460427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:43:38.020 [2024-12-09 05:38:29.460439] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:43:38.020 [2024-12-09 05:38:29.460452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:43:38.020 [2024-12-09 05:38:29.460463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:43:38.020 [2024-12-09 05:38:29.460476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:43:38.020 [2024-12-09 05:38:29.460487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:43:38.020 [2024-12-09 05:38:29.460499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:43:38.020 [2024-12-09 05:38:29.460508] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 01:43:38.020 [2024-12-09 05:38:29.460522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.020 [2024-12-09 05:38:29.460533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:43:38.020 [2024-12-09 05:38:29.460552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 01:43:38.020 [2024-12-09 05:38:29.460563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.020 [2024-12-09 05:38:29.476415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.020 [2024-12-09 05:38:29.476451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:43:38.020 [2024-12-09 05:38:29.476471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.789 ms 01:43:38.020 [2024-12-09 05:38:29.476482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.020 [2024-12-09 05:38:29.476984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:38.020 [2024-12-09 05:38:29.477009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:43:38.020 [2024-12-09 05:38:29.477026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 01:43:38.021 [2024-12-09 05:38:29.477037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.021 [2024-12-09 05:38:29.527152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.021 [2024-12-09 05:38:29.527550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:38.021 [2024-12-09 05:38:29.527587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.021 [2024-12-09 05:38:29.527601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.021 [2024-12-09 05:38:29.527751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.021 [2024-12-09 05:38:29.527773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:38.021 [2024-12-09 05:38:29.527789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.021 [2024-12-09 05:38:29.527801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.021 [2024-12-09 05:38:29.527989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.021 [2024-12-09 05:38:29.528009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:38.021 [2024-12-09 05:38:29.528040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.021 [2024-12-09 05:38:29.528052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.021 [2024-12-09 05:38:29.528109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.021 [2024-12-09 05:38:29.528130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:38.021 [2024-12-09 05:38:29.528149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.021 [2024-12-09 05:38:29.528160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.642357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.279 [2024-12-09 05:38:29.642485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:38.279 [2024-12-09 05:38:29.642516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
01:43:38.279 [2024-12-09 05:38:29.642529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.718418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.279 [2024-12-09 05:38:29.718825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:38.279 [2024-12-09 05:38:29.718870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.279 [2024-12-09 05:38:29.718885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.719076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.279 [2024-12-09 05:38:29.719096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:38.279 [2024-12-09 05:38:29.719114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.279 [2024-12-09 05:38:29.719127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.719217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.279 [2024-12-09 05:38:29.719233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:38.279 [2024-12-09 05:38:29.719249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.279 [2024-12-09 05:38:29.719263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.719408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.279 [2024-12-09 05:38:29.719426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:38.279 [2024-12-09 05:38:29.719441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.279 [2024-12-09 05:38:29.719452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.719513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.279 [2024-12-09 05:38:29.719531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:43:38.279 [2024-12-09 05:38:29.719545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.279 [2024-12-09 05:38:29.719556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.279 [2024-12-09 05:38:29.719617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.280 [2024-12-09 05:38:29.719632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:38.280 [2024-12-09 05:38:29.719645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.280 [2024-12-09 05:38:29.719657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.280 [2024-12-09 05:38:29.719739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:43:38.280 [2024-12-09 05:38:29.719757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:38.280 [2024-12-09 05:38:29.719771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:43:38.280 [2024-12-09 05:38:29.719785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:38.280 [2024-12-09 05:38:29.719994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.563 ms, result 0 01:43:38.280 true 01:43:38.280 05:38:29 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79182 
01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79182 ']' 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79182 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79182 01:43:38.280 killing process with pid 79182 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79182' 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79182 01:43:38.280 05:38:29 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79182 01:43:43.613 05:38:34 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 01:43:47.799 262144+0 records in 01:43:47.799 262144+0 records out 01:43:47.799 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.5843 s, 234 MB/s 01:43:47.799 05:38:38 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:43:49.700 05:38:40 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:43:49.700 [2024-12-09 05:38:41.059554] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:43:49.700 [2024-12-09 05:38:41.059775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79424 ] 01:43:49.700 [2024-12-09 05:38:41.242780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:43:49.957 [2024-12-09 05:38:41.393982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:43:50.215 [2024-12-09 05:38:41.766575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:50.215 [2024-12-09 05:38:41.766947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:43:50.473 [2024-12-09 05:38:41.938120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.938185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:43:50.473 [2024-12-09 05:38:41.938223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:43:50.473 [2024-12-09 05:38:41.938235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.938306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.938332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:43:50.473 [2024-12-09 05:38:41.938346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:43:50.473 [2024-12-09 05:38:41.938357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.938388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 01:43:50.473 [2024-12-09 05:38:41.939365] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:43:50.473 [2024-12-09 05:38:41.939547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.939567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:43:50.473 [2024-12-09 05:38:41.939582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 01:43:50.473 [2024-12-09 05:38:41.939593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.941556] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:43:50.473 [2024-12-09 05:38:41.958416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.958481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:43:50.473 [2024-12-09 05:38:41.958516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.861 ms 01:43:50.473 [2024-12-09 05:38:41.958528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.958617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.958636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:43:50.473 [2024-12-09 05:38:41.958649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 01:43:50.473 [2024-12-09 05:38:41.958680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.967838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.967891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:43:50.473 [2024-12-09 05:38:41.967924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.014 ms 01:43:50.473 [2024-12-09 05:38:41.967952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.968077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.968097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:43:50.473 [2024-12-09 05:38:41.968110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 01:43:50.473 [2024-12-09 05:38:41.968122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.968186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.968204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:43:50.473 [2024-12-09 05:38:41.968216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:43:50.473 [2024-12-09 05:38:41.968227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.968279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:43:50.473 [2024-12-09 05:38:41.973274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.973313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:43:50.473 [2024-12-09 05:38:41.973355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.005 ms 01:43:50.473 [2024-12-09 05:38:41.973367] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.973405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.473 [2024-12-09 05:38:41.973421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:43:50.473 [2024-12-09 05:38:41.973433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:43:50.473 [2024-12-09 05:38:41.973444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.473 [2024-12-09 05:38:41.973510] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:43:50.473 [2024-12-09 05:38:41.973548] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:43:50.474 [2024-12-09 05:38:41.973592] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:43:50.474 [2024-12-09 05:38:41.973622] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:43:50.474 [2024-12-09 05:38:41.973764] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:43:50.474 [2024-12-09 05:38:41.973785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:43:50.474 [2024-12-09 05:38:41.973818] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:43:50.474 [2024-12-09 05:38:41.973834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:43:50.474 [2024-12-09 05:38:41.973848] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:43:50.474 [2024-12-09 05:38:41.973860] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:43:50.474 [2024-12-09 05:38:41.973872] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:43:50.474 [2024-12-09 05:38:41.973893] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:43:50.474 [2024-12-09 05:38:41.973904] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:43:50.474 [2024-12-09 05:38:41.973917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.474 [2024-12-09 05:38:41.973929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:43:50.474 [2024-12-09 05:38:41.973942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 01:43:50.474 [2024-12-09 05:38:41.973953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.474 [2024-12-09 05:38:41.974068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.474 [2024-12-09 05:38:41.974084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:43:50.474 [2024-12-09 05:38:41.974096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 01:43:50.474 [2024-12-09 05:38:41.974107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.474 [2024-12-09 05:38:41.974235] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:43:50.474 [2024-12-09 05:38:41.974256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:43:50.474 [2024-12-09 05:38:41.974269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 01:43:50.474 [2024-12-09 05:38:41.974280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:43:50.474 [2024-12-09 05:38:41.974303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:43:50.474 [2024-12-09 05:38:41.974335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:50.474 [2024-12-09 05:38:41.974355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:43:50.474 [2024-12-09 05:38:41.974365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:43:50.474 [2024-12-09 05:38:41.974375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:43:50.474 [2024-12-09 05:38:41.974407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:43:50.474 [2024-12-09 05:38:41.974418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:43:50.474 [2024-12-09 05:38:41.974456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:43:50.474 [2024-12-09 05:38:41.974483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:43:50.474 [2024-12-09 05:38:41.974516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:43:50.474 [2024-12-09 05:38:41.974548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:43:50.474 [2024-12-09 05:38:41.974579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:43:50.474 [2024-12-09 05:38:41.974611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:43:50.474 [2024-12-09 05:38:41.974642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:50.474 [2024-12-09 05:38:41.974663] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 01:43:50.474 [2024-12-09 05:38:41.974674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:43:50.474 [2024-12-09 05:38:41.974709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:43:50.474 [2024-12-09 05:38:41.974722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:43:50.474 [2024-12-09 05:38:41.974733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:43:50.474 [2024-12-09 05:38:41.974743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:43:50.474 [2024-12-09 05:38:41.974774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:43:50.474 [2024-12-09 05:38:41.974787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974797] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:43:50.474 [2024-12-09 05:38:41.974809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:43:50.474 [2024-12-09 05:38:41.974821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:43:50.474 [2024-12-09 05:38:41.974846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:43:50.474 [2024-12-09 05:38:41.974857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:43:50.474 [2024-12-09 05:38:41.974869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:43:50.474 [2024-12-09 05:38:41.974880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:43:50.474 [2024-12-09 05:38:41.974899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:43:50.474 [2024-12-09 05:38:41.974910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:43:50.474 [2024-12-09 05:38:41.974923] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:43:50.474 [2024-12-09 05:38:41.974941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.974965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:43:50.474 [2024-12-09 05:38:41.974977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:43:50.474 [2024-12-09 05:38:41.974988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:43:50.474 [2024-12-09 05:38:41.975000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:43:50.474 [2024-12-09 05:38:41.975011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:43:50.474 [2024-12-09 05:38:41.975022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:43:50.474 [2024-12-09 05:38:41.975033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:43:50.474 [2024-12-09 05:38:41.975044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:43:50.474 [2024-12-09 05:38:41.975055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:43:50.474 [2024-12-09 05:38:41.975066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.975076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.975087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.975112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.975123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:43:50.474 [2024-12-09 05:38:41.975133] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:43:50.474 [2024-12-09 05:38:41.975146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.975157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:43:50.474 [2024-12-09 05:38:41.975168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:43:50.474 [2024-12-09 05:38:41.975179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:43:50.474 [2024-12-09 05:38:41.975190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:43:50.474 [2024-12-09 05:38:41.975202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.474 [2024-12-09 05:38:41.975213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:43:50.474 [2024-12-09 05:38:41.975225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 01:43:50.475 [2024-12-09 05:38:41.975236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.475 [2024-12-09 05:38:42.017238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.475 [2024-12-09 05:38:42.017300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:43:50.475 [2024-12-09 05:38:42.017337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.926 ms 01:43:50.475 [2024-12-09 05:38:42.017361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.475 [2024-12-09 05:38:42.017482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.475 [2024-12-09 05:38:42.017499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:43:50.475 [2024-12-09 05:38:42.017513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.071 ms 01:43:50.475 [2024-12-09 05:38:42.017524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.475 [2024-12-09 05:38:42.085583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.475 [2024-12-09 05:38:42.085647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:43:50.475 [2024-12-09 05:38:42.085713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.949 ms 01:43:50.475 [2024-12-09 05:38:42.085744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.475 [2024-12-09 05:38:42.085822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.475 [2024-12-09 05:38:42.085840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:43:50.475 [2024-12-09 05:38:42.085859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:43:50.475 [2024-12-09 05:38:42.085870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.475 [2024-12-09 05:38:42.086603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.475 [2024-12-09 05:38:42.086629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:43:50.475 [2024-12-09 05:38:42.086644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 01:43:50.475 [2024-12-09 05:38:42.086655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.475 [2024-12-09 05:38:42.086850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.475 [2024-12-09 05:38:42.086886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:43:50.475 [2024-12-09 05:38:42.086908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 01:43:50.475 [2024-12-09 05:38:42.086919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.107893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.107946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:43:50.750 [2024-12-09 05:38:42.107981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.944 ms 01:43:50.750 [2024-12-09 05:38:42.107994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.124349] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 01:43:50.750 [2024-12-09 05:38:42.124392] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:43:50.750 [2024-12-09 05:38:42.124427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.124442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:43:50.750 [2024-12-09 05:38:42.124455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.243 ms 01:43:50.750 [2024-12-09 05:38:42.124465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.151652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.151711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:43:50.750 [2024-12-09 05:38:42.151744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.145 ms 01:43:50.750 [2024-12-09 05:38:42.151756] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.165944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.165984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:43:50.750 [2024-12-09 05:38:42.166017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.144 ms 01:43:50.750 [2024-12-09 05:38:42.166043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.179818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.179857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:43:50.750 [2024-12-09 05:38:42.179888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.735 ms 01:43:50.750 [2024-12-09 05:38:42.179898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.180697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.180724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:43:50.750 [2024-12-09 05:38:42.180737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 01:43:50.750 [2024-12-09 05:38:42.180773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.253739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.253815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:43:50.750 [2024-12-09 05:38:42.253853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.938 ms 01:43:50.750 [2024-12-09 05:38:42.253871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.265021] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:43:50.750 [2024-12-09 05:38:42.267972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.268007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:43:50.750 [2024-12-09 05:38:42.268039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.034 ms 01:43:50.750 [2024-12-09 05:38:42.268050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.268165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.268185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:43:50.750 [2024-12-09 05:38:42.268197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:43:50.750 [2024-12-09 05:38:42.268208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.268304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.268322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:43:50.750 [2024-12-09 05:38:42.268334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 01:43:50.750 [2024-12-09 05:38:42.268344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.268375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.268389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 01:43:50.750 [2024-12-09 05:38:42.268400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:43:50.750 [2024-12-09 05:38:42.268411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.268452] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:43:50.750 [2024-12-09 05:38:42.268472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.268483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:43:50.750 [2024-12-09 05:38:42.268494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 01:43:50.750 [2024-12-09 05:38:42.268504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.297280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.297322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:43:50.750 [2024-12-09 05:38:42.297355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.751 ms 01:43:50.750 [2024-12-09 05:38:42.297371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.297455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:43:50.750 [2024-12-09 05:38:42.297473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:43:50.750 [2024-12-09 05:38:42.297485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:43:50.750 [2024-12-09 05:38:42.297495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:43:50.750 [2024-12-09 05:38:42.299191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.494 ms, result 0 01:43:52.123  [2024-12-09T05:38:44.674Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-09T05:38:45.627Z] Copying: 48/1024 [MB] (23 MBps) [2024-12-09T05:38:46.576Z] Copying: 73/1024 [MB] (24 MBps) [2024-12-09T05:38:47.511Z] Copying: 97/1024 [MB] (23 MBps) [2024-12-09T05:38:48.444Z] Copying: 121/1024 [MB] (24 MBps) [2024-12-09T05:38:49.378Z] Copying: 146/1024 [MB] (25 MBps) [2024-12-09T05:38:50.313Z] Copying: 170/1024 [MB] (23 MBps) [2024-12-09T05:38:51.687Z] Copying: 194/1024 [MB] (24 MBps) [2024-12-09T05:38:52.623Z] Copying: 218/1024 [MB] (23 MBps) [2024-12-09T05:38:53.559Z] Copying: 241/1024 [MB] (23 MBps) [2024-12-09T05:38:54.496Z] Copying: 264/1024 [MB] (22 MBps) [2024-12-09T05:38:55.466Z] Copying: 287/1024 [MB] (23 MBps) [2024-12-09T05:38:56.402Z] Copying: 311/1024 [MB] (23 MBps) [2024-12-09T05:38:57.339Z] Copying: 334/1024 [MB] (23 MBps) [2024-12-09T05:38:58.717Z] Copying: 357/1024 [MB] (23 MBps) [2024-12-09T05:38:59.665Z] Copying: 380/1024 [MB] (22 MBps) [2024-12-09T05:39:00.600Z] Copying: 403/1024 [MB] (22 MBps) [2024-12-09T05:39:01.533Z] Copying: 426/1024 [MB] (22 MBps) [2024-12-09T05:39:02.469Z] Copying: 449/1024 [MB] (23 MBps) [2024-12-09T05:39:03.412Z] Copying: 472/1024 [MB] (23 MBps) [2024-12-09T05:39:04.352Z] Copying: 495/1024 [MB] (23 MBps) [2024-12-09T05:39:05.736Z] Copying: 518/1024 [MB] (23 MBps) [2024-12-09T05:39:06.672Z] Copying: 541/1024 [MB] (22 MBps) [2024-12-09T05:39:07.609Z] Copying: 563/1024 [MB] (22 MBps) [2024-12-09T05:39:08.546Z] Copying: 586/1024 [MB] (22 MBps) [2024-12-09T05:39:09.481Z] Copying: 610/1024 [MB] (23 MBps) [2024-12-09T05:39:10.418Z] Copying: 633/1024 [MB] (23 
MBps) [2024-12-09T05:39:11.353Z] Copying: 656/1024 [MB] (23 MBps) [2024-12-09T05:39:12.739Z] Copying: 679/1024 [MB] (22 MBps) [2024-12-09T05:39:13.672Z] Copying: 703/1024 [MB] (23 MBps) [2024-12-09T05:39:14.606Z] Copying: 725/1024 [MB] (22 MBps) [2024-12-09T05:39:15.543Z] Copying: 748/1024 [MB] (22 MBps) [2024-12-09T05:39:16.478Z] Copying: 772/1024 [MB] (23 MBps) [2024-12-09T05:39:17.412Z] Copying: 794/1024 [MB] (22 MBps) [2024-12-09T05:39:18.347Z] Copying: 817/1024 [MB] (23 MBps) [2024-12-09T05:39:19.750Z] Copying: 841/1024 [MB] (23 MBps) [2024-12-09T05:39:20.317Z] Copying: 864/1024 [MB] (23 MBps) [2024-12-09T05:39:21.696Z] Copying: 888/1024 [MB] (23 MBps) [2024-12-09T05:39:22.632Z] Copying: 911/1024 [MB] (22 MBps) [2024-12-09T05:39:23.604Z] Copying: 934/1024 [MB] (23 MBps) [2024-12-09T05:39:24.540Z] Copying: 957/1024 [MB] (23 MBps) [2024-12-09T05:39:25.474Z] Copying: 979/1024 [MB] (21 MBps) [2024-12-09T05:39:26.413Z] Copying: 1003/1024 [MB] (23 MBps) [2024-12-09T05:39:26.413Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-09 05:39:26.212807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 05:39:26.212863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:44:34.796 [2024-12-09 05:39:26.212911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 01:44:34.796 [2024-12-09 05:39:26.212932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.796 [2024-12-09 05:39:26.212976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:44:34.796 [2024-12-09 05:39:26.217544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 05:39:26.217602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:44:34.796 [2024-12-09 05:39:26.217652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 01:44:34.796 [2024-12-09 05:39:26.217669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.796 [2024-12-09 05:39:26.219590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 05:39:26.219869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:44:34.796 [2024-12-09 05:39:26.219911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 01:44:34.796 [2024-12-09 05:39:26.219934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.796 [2024-12-09 05:39:26.236386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 05:39:26.236432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:44:34.796 [2024-12-09 05:39:26.236460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.413 ms 01:44:34.796 [2024-12-09 05:39:26.236480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.796 [2024-12-09 05:39:26.242690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 05:39:26.242931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:44:34.796 [2024-12-09 05:39:26.242972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 01:44:34.796 [2024-12-09 05:39:26.242993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.796 [2024-12-09 05:39:26.269776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 
05:39:26.269827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:44:34.796 [2024-12-09 05:39:26.269851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.689 ms 01:44:34.796 [2024-12-09 05:39:26.269868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.796 [2024-12-09 05:39:26.286080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.796 [2024-12-09 05:39:26.286127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:44:34.796 [2024-12-09 05:39:26.286150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.162 ms 01:44:34.796 [2024-12-09 05:39:26.286168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.797 [2024-12-09 05:39:26.286326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.797 [2024-12-09 05:39:26.286359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:44:34.797 [2024-12-09 05:39:26.286379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 01:44:34.797 [2024-12-09 05:39:26.286395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.797 [2024-12-09 05:39:26.312296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.797 [2024-12-09 05:39:26.312336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:44:34.797 [2024-12-09 05:39:26.312358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.874 ms 01:44:34.797 [2024-12-09 05:39:26.312375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.797 [2024-12-09 05:39:26.337604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.797 [2024-12-09 05:39:26.337643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:44:34.797 [2024-12-09 05:39:26.337707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.179 ms 01:44:34.797 [2024-12-09 05:39:26.337730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.797 [2024-12-09 05:39:26.362560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.797 [2024-12-09 05:39:26.362599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:44:34.797 [2024-12-09 05:39:26.362622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.763 ms 01:44:34.797 [2024-12-09 05:39:26.362639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.797 [2024-12-09 05:39:26.387784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.797 [2024-12-09 05:39:26.387822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:44:34.797 [2024-12-09 05:39:26.387845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.019 ms 01:44:34.797 [2024-12-09 05:39:26.387861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.797 [2024-12-09 05:39:26.387912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:44:34.797 [2024-12-09 05:39:26.387942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.387971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.387987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 3: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388401] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388913] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.388985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:44:34.797 [2024-12-09 05:39:26.389282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 
05:39:26.389371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:44:34.798 [2024-12-09 05:39:26.389778] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:44:34.798 [2024-12-09 05:39:26.389797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2fe6c684-ea00-40ee-a25d-c4c960459442 01:44:34.798 [2024-12-09 
05:39:26.389808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:44:34.798 [2024-12-09 05:39:26.389817] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:44:34.798 [2024-12-09 05:39:26.389828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:44:34.798 [2024-12-09 05:39:26.389839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:44:34.798 [2024-12-09 05:39:26.389848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:44:34.798 [2024-12-09 05:39:26.389873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:44:34.798 [2024-12-09 05:39:26.389892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:44:34.798 [2024-12-09 05:39:26.389905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:44:34.798 [2024-12-09 05:39:26.389920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:44:34.798 [2024-12-09 05:39:26.389939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.798 [2024-12-09 05:39:26.389960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:44:34.798 [2024-12-09 05:39:26.389979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 01:44:34.798 [2024-12-09 05:39:26.389998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.798 [2024-12-09 05:39:26.406114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.798 [2024-12-09 05:39:26.406277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:44:34.798 [2024-12-09 05:39:26.406315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.073 ms 01:44:34.798 [2024-12-09 05:39:26.406335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:34.798 [2024-12-09 05:39:26.407054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:34.798 [2024-12-09 05:39:26.407118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:44:34.798 [2024-12-09 05:39:26.407158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 01:44:34.798 [2024-12-09 05:39:26.407208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.057 [2024-12-09 05:39:26.447997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.057 [2024-12-09 05:39:26.448041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:44:35.057 [2024-12-09 05:39:26.448064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.057 [2024-12-09 05:39:26.448080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.057 [2024-12-09 05:39:26.448161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.057 [2024-12-09 05:39:26.448184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:44:35.057 [2024-12-09 05:39:26.448202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.057 [2024-12-09 05:39:26.448236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.057 [2024-12-09 05:39:26.448351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.057 [2024-12-09 05:39:26.448375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:44:35.057 [2024-12-09 05:39:26.448394] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.057 [2024-12-09 05:39:26.448410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.057 [2024-12-09 05:39:26.448443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.057 [2024-12-09 05:39:26.448463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:44:35.057 [2024-12-09 05:39:26.448481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.057 [2024-12-09 05:39:26.448496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.548440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.548502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:44:35.058 [2024-12-09 05:39:26.548529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.548545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.624240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.624306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:44:35.058 [2024-12-09 05:39:26.624333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.624375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.624529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.624556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:44:35.058 [2024-12-09 05:39:26.624590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.624608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.624675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.624715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:44:35.058 [2024-12-09 05:39:26.624780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.624803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.624987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.625015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:44:35.058 [2024-12-09 05:39:26.625035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.625052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.625161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.625218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:44:35.058 [2024-12-09 05:39:26.625240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.625253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.625314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.625356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
01:44:35.058 [2024-12-09 05:39:26.625376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.625393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.625473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:44:35.058 [2024-12-09 05:39:26.625500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:44:35.058 [2024-12-09 05:39:26.625520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:44:35.058 [2024-12-09 05:39:26.625539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:35.058 [2024-12-09 05:39:26.625812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.967 ms, result 0 01:44:35.994 01:44:35.994 01:44:36.253 05:39:27 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 01:44:36.253 [2024-12-09 05:39:27.738125] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:44:36.253 [2024-12-09 05:39:27.738306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79884 ] 01:44:36.512 [2024-12-09 05:39:27.920298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:44:36.512 [2024-12-09 05:39:28.030279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:44:36.771 [2024-12-09 05:39:28.368617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:44:36.771 [2024-12-09 05:39:28.368733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:44:37.031 [2024-12-09 05:39:28.530556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.530612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:44:37.031 [2024-12-09 05:39:28.530643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:44:37.031 [2024-12-09 05:39:28.530682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.530792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.530823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:44:37.031 [2024-12-09 05:39:28.530843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 01:44:37.031 [2024-12-09 05:39:28.530861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.530907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:44:37.031 [2024-12-09 05:39:28.532143] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:44:37.031 [2024-12-09 05:39:28.532201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.532223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:44:37.031 [2024-12-09 05:39:28.532243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 
01:44:37.031 [2024-12-09 05:39:28.532259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.534479] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:44:37.031 [2024-12-09 05:39:28.550235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.550276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:44:37.031 [2024-12-09 05:39:28.550301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.758 ms 01:44:37.031 [2024-12-09 05:39:28.550319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.550409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.550462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:44:37.031 [2024-12-09 05:39:28.550483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 01:44:37.031 [2024-12-09 05:39:28.550501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.559882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.559922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:44:37.031 [2024-12-09 05:39:28.559946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.260 ms 01:44:37.031 [2024-12-09 05:39:28.559974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.560097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.560124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:44:37.031 [2024-12-09 05:39:28.560142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 01:44:37.031 [2024-12-09 05:39:28.560159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.560238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.560262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:44:37.031 [2024-12-09 05:39:28.560280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:44:37.031 [2024-12-09 05:39:28.560299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.560353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:44:37.031 [2024-12-09 05:39:28.564916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.564962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:44:37.031 [2024-12-09 05:39:28.564994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 01:44:37.031 [2024-12-09 05:39:28.565013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 [2024-12-09 05:39:28.565084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.031 [2024-12-09 05:39:28.565108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:44:37.031 [2024-12-09 05:39:28.565125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:44:37.031 [2024-12-09 05:39:28.565142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.031 
[2024-12-09 05:39:28.565224] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:44:37.031 [2024-12-09 05:39:28.565266] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:44:37.031 [2024-12-09 05:39:28.565317] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:44:37.031 [2024-12-09 05:39:28.565352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:44:37.031 [2024-12-09 05:39:28.565470] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:44:37.031 [2024-12-09 05:39:28.565496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:44:37.031 [2024-12-09 05:39:28.565517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:44:37.031 [2024-12-09 05:39:28.565536] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:44:37.031 [2024-12-09 05:39:28.565556] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:44:37.031 [2024-12-09 05:39:28.565574] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:44:37.031 [2024-12-09 05:39:28.565589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:44:37.031 [2024-12-09 05:39:28.565612] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:44:37.031 [2024-12-09 05:39:28.565627] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:44:37.032 [2024-12-09 05:39:28.565643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.032 [2024-12-09 05:39:28.565659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:44:37.032 [2024-12-09 05:39:28.565716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 01:44:37.032 [2024-12-09 05:39:28.565739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.032 [2024-12-09 05:39:28.565851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.032 [2024-12-09 05:39:28.565875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:44:37.032 [2024-12-09 05:39:28.565894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 01:44:37.032 [2024-12-09 05:39:28.565910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.032 [2024-12-09 05:39:28.566054] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:44:37.032 [2024-12-09 05:39:28.566098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:44:37.032 [2024-12-09 05:39:28.566116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:44:37.032 [2024-12-09 05:39:28.566166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566198] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 01:44:37.032 [2024-12-09 05:39:28.566213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:44:37.032 [2024-12-09 05:39:28.566243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:44:37.032 [2024-12-09 05:39:28.566258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:44:37.032 [2024-12-09 05:39:28.566273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:44:37.032 [2024-12-09 05:39:28.566305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:44:37.032 [2024-12-09 05:39:28.566321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:44:37.032 [2024-12-09 05:39:28.566336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:44:37.032 [2024-12-09 05:39:28.566366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:44:37.032 [2024-12-09 05:39:28.566411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:44:37.032 [2024-12-09 05:39:28.566494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:44:37.032 [2024-12-09 05:39:28.566542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:44:37.032 [2024-12-09 05:39:28.566588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:44:37.032 [2024-12-09 05:39:28.566635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:44:37.032 [2024-12-09 05:39:28.566668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:44:37.032 [2024-12-09 05:39:28.566700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:44:37.032 [2024-12-09 05:39:28.566719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:44:37.032 [2024-12-09 05:39:28.566736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:44:37.032 [2024-12-09 05:39:28.566751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:44:37.032 [2024-12-09 
05:39:28.566781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:44:37.032 [2024-12-09 05:39:28.566811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:44:37.032 [2024-12-09 05:39:28.566828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566844] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:44:37.032 [2024-12-09 05:39:28.566861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:44:37.032 [2024-12-09 05:39:28.566876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:44:37.032 [2024-12-09 05:39:28.566893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:44:37.032 [2024-12-09 05:39:28.566909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:44:37.032 [2024-12-09 05:39:28.566925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:44:37.032 [2024-12-09 05:39:28.566941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:44:37.032 [2024-12-09 05:39:28.566956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:44:37.032 [2024-12-09 05:39:28.566971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:44:37.032 [2024-12-09 05:39:28.566988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:44:37.032 [2024-12-09 05:39:28.567005] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:44:37.032 [2024-12-09 05:39:28.567025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:44:37.032 [2024-12-09 05:39:28.567068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:44:37.032 [2024-12-09 05:39:28.567084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:44:37.032 [2024-12-09 05:39:28.567100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:44:37.032 [2024-12-09 05:39:28.567117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:44:37.032 [2024-12-09 05:39:28.567133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:44:37.032 [2024-12-09 05:39:28.567150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:44:37.032 [2024-12-09 05:39:28.567167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:44:37.032 [2024-12-09 05:39:28.567182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:44:37.032 [2024-12-09 05:39:28.567198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:44:37.032 [2024-12-09 05:39:28.567280] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:44:37.032 [2024-12-09 05:39:28.567299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:44:37.032 [2024-12-09 05:39:28.567334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:44:37.032 [2024-12-09 05:39:28.567351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:44:37.032 [2024-12-09 05:39:28.567368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:44:37.032 [2024-12-09 05:39:28.567387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.032 [2024-12-09 05:39:28.567405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:44:37.032 [2024-12-09 05:39:28.567421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 01:44:37.032 [2024-12-09 05:39:28.567437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.032 [2024-12-09 05:39:28.603181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.032 [2024-12-09 05:39:28.603462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:44:37.032 [2024-12-09 05:39:28.603601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.660 ms 01:44:37.032 [2024-12-09 05:39:28.603787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.032 [2024-12-09 05:39:28.604065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.032 [2024-12-09 05:39:28.604232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:44:37.032 [2024-12-09 05:39:28.604376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 01:44:37.032 [2024-12-09 05:39:28.604450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.655584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.655829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:44:37.292 [2024-12-09 05:39:28.655991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.951 ms 01:44:37.292 [2024-12-09 05:39:28.656129] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.656278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.656353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:44:37.292 [2024-12-09 05:39:28.656526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:44:37.292 [2024-12-09 05:39:28.656655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.657468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.657639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:44:37.292 [2024-12-09 05:39:28.657826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 01:44:37.292 [2024-12-09 05:39:28.657964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.658368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.658591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:44:37.292 [2024-12-09 05:39:28.658778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 01:44:37.292 [2024-12-09 05:39:28.658956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.676460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.676500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:44:37.292 [2024-12-09 05:39:28.676524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.316 ms 01:44:37.292 [2024-12-09 05:39:28.676542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.692898] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:44:37.292 [2024-12-09 05:39:28.692956] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:44:37.292 [2024-12-09 05:39:28.692982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.693017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:44:37.292 [2024-12-09 05:39:28.693036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.196 ms 01:44:37.292 [2024-12-09 05:39:28.693050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.718245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.718287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:44:37.292 [2024-12-09 05:39:28.718311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.138 ms 01:44:37.292 [2024-12-09 05:39:28.718328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.731522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.731733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:44:37.292 [2024-12-09 05:39:28.731770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.123 ms 01:44:37.292 [2024-12-09 05:39:28.731792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
01:44:37.292 [2024-12-09 05:39:28.744771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.744813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:44:37.292 [2024-12-09 05:39:28.744838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 01:44:37.292 [2024-12-09 05:39:28.744855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.745802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.745860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:44:37.292 [2024-12-09 05:39:28.745891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 01:44:37.292 [2024-12-09 05:39:28.745909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.813446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.292 [2024-12-09 05:39:28.813554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:44:37.292 [2024-12-09 05:39:28.813594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.483 ms 01:44:37.292 [2024-12-09 05:39:28.813612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.292 [2024-12-09 05:39:28.824507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:44:37.293 [2024-12-09 05:39:28.827046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 [2024-12-09 05:39:28.827220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:44:37.293 [2024-12-09 05:39:28.827255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.313 ms 01:44:37.293 [2024-12-09 05:39:28.827275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.827419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 [2024-12-09 05:39:28.827450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:44:37.293 [2024-12-09 05:39:28.827477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:44:37.293 [2024-12-09 05:39:28.827495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.827646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 [2024-12-09 05:39:28.827671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:44:37.293 [2024-12-09 05:39:28.827734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 01:44:37.293 [2024-12-09 05:39:28.827753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.827833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 [2024-12-09 05:39:28.827856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:44:37.293 [2024-12-09 05:39:28.827873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 01:44:37.293 [2024-12-09 05:39:28.827889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.827958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:44:37.293 [2024-12-09 05:39:28.827983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 
[2024-12-09 05:39:28.828001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:44:37.293 [2024-12-09 05:39:28.828018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 01:44:37.293 [2024-12-09 05:39:28.828034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.855074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 [2024-12-09 05:39:28.855116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:44:37.293 [2024-12-09 05:39:28.855148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.967 ms 01:44:37.293 [2024-12-09 05:39:28.855167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.855264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:44:37.293 [2024-12-09 05:39:28.855291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:44:37.293 [2024-12-09 05:39:28.855309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 01:44:37.293 [2024-12-09 05:39:28.855324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:44:37.293 [2024-12-09 05:39:28.857074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.896 ms, result 0 01:44:38.670  [2024-12-09T05:39:31.223Z → 2024-12-09T05:40:14.764Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-09 05:40:14.702514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.702602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:45:23.147 [2024-12-09 05:40:14.702640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 01:45:23.147 [2024-12-09 05:40:14.702652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.702725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:45:23.147 [2024-12-09 05:40:14.706362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.706405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:45:23.147 [2024-12-09 05:40:14.706455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.611 ms 01:45:23.147 [2024-12-09 05:40:14.706466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.706727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.706747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:45:23.147 [2024-12-09 05:40:14.706775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 01:45:23.147 [2024-12-09 05:40:14.706785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.709863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.709890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:45:23.147 [2024-12-09 05:40:14.709920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 01:45:23.147 [2024-12-09 05:40:14.709935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.716549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.716585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:45:23.147 [2024-12-09 05:40:14.716600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.593 ms 01:45:23.147 [2024-12-09 05:40:14.716611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.743122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.743159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:45:23.147 [2024-12-09 05:40:14.743175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.436 ms 01:45:23.147 [2024-12-09 05:40:14.743185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.759421] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.759462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:45:23.147 [2024-12-09 05:40:14.759494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.196 ms 01:45:23.147 [2024-12-09 05:40:14.759505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.147 [2024-12-09 05:40:14.759642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.147 [2024-12-09 05:40:14.759661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:45:23.147 [2024-12-09 05:40:14.759725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 01:45:23.147 [2024-12-09 05:40:14.759736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.407 [2024-12-09 05:40:14.787950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.407 [2024-12-09 05:40:14.787990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:45:23.407 [2024-12-09 05:40:14.788006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.193 ms 01:45:23.407 [2024-12-09 05:40:14.788017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.407 [2024-12-09 05:40:14.813738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.407 [2024-12-09 05:40:14.813776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:45:23.407 [2024-12-09 05:40:14.813790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.669 ms 01:45:23.407 [2024-12-09 05:40:14.813799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.407 [2024-12-09 05:40:14.839140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.407 [2024-12-09 05:40:14.839337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:45:23.407 [2024-12-09 05:40:14.839362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.302 ms 01:45:23.407 [2024-12-09 05:40:14.839373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.407 [2024-12-09 05:40:14.864320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.407 [2024-12-09 05:40:14.864359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:45:23.407 [2024-12-09 05:40:14.864374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.861 ms 01:45:23.407 [2024-12-09 05:40:14.864384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.407 [2024-12-09 05:40:14.864421] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:45:23.407 [2024-12-09 05:40:14.864448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-100: 0 / 261120 wr_cnt: 0 state: free] 01:45:23.408 [2024-12-09 05:40:14.865502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:45:23.408 [2024-12-09 05:40:14.865512] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2fe6c684-ea00-40ee-a25d-c4c960459442 01:45:23.408 [2024-12-09 05:40:14.865522] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:45:23.408 [2024-12-09 05:40:14.865532] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:45:23.408 [2024-12-09 05:40:14.865541] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:45:23.408 [2024-12-09 05:40:14.865551] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 01:45:23.408 [2024-12-09 05:40:14.865581] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:45:23.408 [2024-12-09 05:40:14.865592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:45:23.409 [2024-12-09 05:40:14.865601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:45:23.409 [2024-12-09 05:40:14.865610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:45:23.409 [2024-12-09 05:40:14.865618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:45:23.409 [2024-12-09 05:40:14.865628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.409 [2024-12-09 05:40:14.865637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:45:23.409 [2024-12-09 05:40:14.865648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 01:45:23.409 [2024-12-09 05:40:14.866013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:14.880193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.409 [2024-12-09 05:40:14.880230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:45:23.409 [2024-12-09 05:40:14.880245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.110 ms 01:45:23.409 [2024-12-09 05:40:14.880255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:14.880753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:23.409 [2024-12-09 05:40:14.880779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:45:23.409 [2024-12-09 05:40:14.880800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 01:45:23.409 [2024-12-09 05:40:14.880812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:14.919191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.409 [2024-12-09 05:40:14.919396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:45:23.409 [2024-12-09 05:40:14.919421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.409 [2024-12-09 05:40:14.919433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:14.919498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.409 [2024-12-09 05:40:14.919512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:45:23.409 [2024-12-09 05:40:14.919531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.409 [2024-12-09 05:40:14.919542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:14.919650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.409 [2024-12-09 05:40:14.919738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:45:23.409 [2024-12-09 05:40:14.919752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.409 [2024-12-09 05:40:14.919763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:14.919786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.409 [2024-12-09 05:40:14.919800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:45:23.409 [2024-12-09 
05:40:14.919810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.409 [2024-12-09 05:40:14.919840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.409 [2024-12-09 05:40:15.007423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.409 [2024-12-09 05:40:15.007478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:45:23.409 [2024-12-09 05:40:15.007557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.409 [2024-12-09 05:40:15.007568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.077233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.077305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:45:23.666 [2024-12-09 05:40:15.077349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.077360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.077439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.077455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:45:23.666 [2024-12-09 05:40:15.077466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.077476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.077545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.077561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:45:23.666 [2024-12-09 05:40:15.077571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.077581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.077735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.077755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:45:23.666 [2024-12-09 05:40:15.077767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.077777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.077841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.077859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:45:23.666 [2024-12-09 05:40:15.077870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.077880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.077963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.077980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:45:23.666 [2024-12-09 05:40:15.077990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.078001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.078052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:45:23.666 [2024-12-09 05:40:15.078067] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 01:45:23.666 [2024-12-09 05:40:15.078079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:45:23.666 [2024-12-09 05:40:15.078089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:23.666 [2024-12-09 05:40:15.078300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.776 ms, result 0 01:45:24.600 01:45:24.600 01:45:24.600 05:40:15 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:45:26.521 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:45:26.521 05:40:17 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 01:45:26.521 [2024-12-09 05:40:17.790612] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:45:26.521 [2024-12-09 05:40:17.791006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80378 ] 01:45:26.521 [2024-12-09 05:40:17.965606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:45:26.521 [2024-12-09 05:40:18.108331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:45:27.090 [2024-12-09 05:40:18.417135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:45:27.090 [2024-12-09 05:40:18.417215] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:45:27.090 [2024-12-09 05:40:18.575963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.576017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:45:27.090 [2024-12-09 05:40:18.576046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:45:27.090 [2024-12-09 05:40:18.576058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.576115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.576135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:45:27.090 [2024-12-09 05:40:18.576146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 01:45:27.090 [2024-12-09 05:40:18.576155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.576183] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:45:27.090 [2024-12-09 05:40:18.576958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:45:27.090 [2024-12-09 05:40:18.577005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.577018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:45:27.090 [2024-12-09 05:40:18.577030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 01:45:27.090 [2024-12-09 05:40:18.577041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.579009] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: 
clean 0, shm_clean 0 01:45:27.090 [2024-12-09 05:40:18.593363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.593415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:45:27.090 [2024-12-09 05:40:18.593431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.355 ms 01:45:27.090 [2024-12-09 05:40:18.593442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.593512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.593531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:45:27.090 [2024-12-09 05:40:18.593543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 01:45:27.090 [2024-12-09 05:40:18.593939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.602641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.602860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:45:27.090 [2024-12-09 05:40:18.602893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 01:45:27.090 [2024-12-09 05:40:18.602913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.603008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.603025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:45:27.090 [2024-12-09 05:40:18.603038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 01:45:27.090 [2024-12-09 05:40:18.603048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.603116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.603133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:45:27.090 [2024-12-09 05:40:18.603145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:45:27.090 [2024-12-09 05:40:18.603156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.603207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:45:27.090 [2024-12-09 05:40:18.607658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.607696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:45:27.090 [2024-12-09 05:40:18.607716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.475 ms 01:45:27.090 [2024-12-09 05:40:18.607726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.607761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.607789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:45:27.090 [2024-12-09 05:40:18.607799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:45:27.090 [2024-12-09 05:40:18.607809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.607868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:45:27.090 [2024-12-09 05:40:18.607907] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 01:45:27.090 [2024-12-09 05:40:18.607957] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:45:27.090 [2024-12-09 05:40:18.607978] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:45:27.090 [2024-12-09 05:40:18.608066] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:45:27.090 [2024-12-09 05:40:18.608079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:45:27.090 [2024-12-09 05:40:18.608092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:45:27.090 [2024-12-09 05:40:18.608104] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:45:27.090 [2024-12-09 05:40:18.608123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:45:27.090 [2024-12-09 05:40:18.608134] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:45:27.090 [2024-12-09 05:40:18.608148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:45:27.090 [2024-12-09 05:40:18.608161] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:45:27.090 [2024-12-09 05:40:18.608170] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:45:27.090 [2024-12-09 05:40:18.608181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.608191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:45:27.090 [2024-12-09 05:40:18.608201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 01:45:27.090 [2024-12-09 05:40:18.608211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.608288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.090 [2024-12-09 05:40:18.608301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:45:27.090 [2024-12-09 05:40:18.608312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 01:45:27.090 [2024-12-09 05:40:18.608321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.090 [2024-12-09 05:40:18.608421] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:45:27.090 [2024-12-09 05:40:18.608439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:45:27.090 [2024-12-09 05:40:18.608449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:45:27.090 [2024-12-09 05:40:18.608459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:45:27.090 [2024-12-09 05:40:18.608468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:45:27.090 [2024-12-09 05:40:18.608477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:45:27.090 [2024-12-09 05:40:18.608486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:45:27.090 [2024-12-09 05:40:18.608504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:45:27.090 [2024-12-09 05:40:18.608513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:45:27.090 [2024-12-09 05:40:18.608522] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:45:27.090 [2024-12-09 05:40:18.608530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:45:27.090 [2024-12-09 05:40:18.608539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:45:27.090 [2024-12-09 05:40:18.608547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:45:27.090 [2024-12-09 05:40:18.608568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:45:27.090 [2024-12-09 05:40:18.608584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:45:27.090 [2024-12-09 05:40:18.608593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:45:27.090 [2024-12-09 05:40:18.608603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:45:27.090 [2024-12-09 05:40:18.608612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:45:27.091 [2024-12-09 05:40:18.608641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:45:27.091 [2024-12-09 05:40:18.608667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:45:27.091 [2024-12-09 05:40:18.608714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:45:27.091 [2024-12-09 05:40:18.608742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:45:27.091 [2024-12-09 05:40:18.608768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:45:27.091 [2024-12-09 05:40:18.608785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:45:27.091 [2024-12-09 05:40:18.608794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:45:27.091 [2024-12-09 05:40:18.608803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:45:27.091 [2024-12-09 05:40:18.608811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:45:27.091 [2024-12-09 05:40:18.608820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:45:27.091 [2024-12-09 05:40:18.608829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:45:27.091 [2024-12-09 
05:40:18.608847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:45:27.091 [2024-12-09 05:40:18.608857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:45:27.091 [2024-12-09 05:40:18.608884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:45:27.091 [2024-12-09 05:40:18.608893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:45:27.091 [2024-12-09 05:40:18.608915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:45:27.091 [2024-12-09 05:40:18.608936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:45:27.091 [2024-12-09 05:40:18.608945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:45:27.091 [2024-12-09 05:40:18.608954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:45:27.091 [2024-12-09 05:40:18.608963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:45:27.091 [2024-12-09 05:40:18.608972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:45:27.091 [2024-12-09 05:40:18.608982] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:45:27.091 [2024-12-09 05:40:18.608995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:45:27.091 [2024-12-09 05:40:18.609020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:45:27.091 [2024-12-09 05:40:18.609030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:45:27.091 [2024-12-09 05:40:18.609040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:45:27.091 [2024-12-09 05:40:18.609050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:45:27.091 [2024-12-09 05:40:18.609059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:45:27.091 [2024-12-09 05:40:18.609069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:45:27.091 [2024-12-09 05:40:18.609079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:45:27.091 [2024-12-09 05:40:18.609088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:45:27.091 [2024-12-09 05:40:18.609098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:45:27.091 [2024-12-09 05:40:18.609147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:45:27.091 [2024-12-09 05:40:18.609158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:45:27.091 [2024-12-09 05:40:18.609179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:45:27.091 [2024-12-09 05:40:18.609189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:45:27.091 [2024-12-09 05:40:18.609198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:45:27.091 [2024-12-09 05:40:18.609209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.609218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:45:27.091 [2024-12-09 05:40:18.609229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 01:45:27.091 [2024-12-09 05:40:18.609239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.091 [2024-12-09 05:40:18.644704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.644924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:45:27.091 [2024-12-09 05:40:18.645046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.392 ms 01:45:27.091 [2024-12-09 05:40:18.645171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.091 [2024-12-09 05:40:18.645314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.645377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:45:27.091 [2024-12-09 05:40:18.645477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 01:45:27.091 [2024-12-09 05:40:18.645523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.091 [2024-12-09 05:40:18.693158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.693373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:45:27.091 [2024-12-09 05:40:18.693493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.435 ms 01:45:27.091 [2024-12-09 05:40:18.693541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.091 [2024-12-09 05:40:18.693718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.693854] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:45:27.091 [2024-12-09 05:40:18.693990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:45:27.091 [2024-12-09 05:40:18.694050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.091 [2024-12-09 05:40:18.694855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.695027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:45:27.091 [2024-12-09 05:40:18.695127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 01:45:27.091 [2024-12-09 05:40:18.695222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.091 [2024-12-09 05:40:18.695421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.091 [2024-12-09 05:40:18.695520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:45:27.091 [2024-12-09 05:40:18.695624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 01:45:27.091 [2024-12-09 05:40:18.695766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.715491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.715534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:45:27.350 [2024-12-09 05:40:18.715550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.604 ms 01:45:27.350 [2024-12-09 05:40:18.715560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.730539] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:45:27.350 [2024-12-09 05:40:18.730579] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:45:27.350 [2024-12-09 05:40:18.730596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.730608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:45:27.350 [2024-12-09 05:40:18.730620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.862 ms 01:45:27.350 [2024-12-09 05:40:18.730630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.754496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.754535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:45:27.350 [2024-12-09 05:40:18.754551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.812 ms 01:45:27.350 [2024-12-09 05:40:18.754561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.767829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.767866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:45:27.350 [2024-12-09 05:40:18.767881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.212 ms 01:45:27.350 [2024-12-09 05:40:18.767891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.780057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.780093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
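(For orientation: the trace_step entries above and below come from FTL startup after a dirty shutdown, driven by SPDK's test/ftl/restore.sh. Its dd/md5 cycle against the FTL bdev boils down to roughly the following minimal sketch, assuming the paths visible earlier in this log; the read-back line with --ib/--of/--skip is an assumption about spdk_dd's mirrored input options, not copied from this log.)

  SPDK=/home/vagrant/spdk_repo/spdk
  # write the test file into the FTL bdev at LBA offset 131072, as logged by restore.sh@79
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 --json=$SPDK/test/ftl/config/ftl.json --seek=131072
  # after the dirty shutdown and restart, read the data back out of the bdev (assumed step; restore.sh also bounds the copy length)
  $SPDK/build/bin/spdk_dd --ib=ftl0 --of=$SPDK/test/ftl/testfile --json=$SPDK/test/ftl/config/ftl.json --skip=131072
  # verify against the checksum recorded before shutdown, as logged by restore.sh@76
  md5sum -c $SPDK/test/ftl/testfile.md5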
01:45:27.350 [2024-12-09 05:40:18.780109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.127 ms 01:45:27.350 [2024-12-09 05:40:18.780133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.780912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.780937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:45:27.350 [2024-12-09 05:40:18.780955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 01:45:27.350 [2024-12-09 05:40:18.780965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.846507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.846756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:45:27.350 [2024-12-09 05:40:18.846809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.516 ms 01:45:27.350 [2024-12-09 05:40:18.846822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.856630] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:45:27.350 [2024-12-09 05:40:18.858608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.858642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:45:27.350 [2024-12-09 05:40:18.858657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.708 ms 01:45:27.350 [2024-12-09 05:40:18.858678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.350 [2024-12-09 05:40:18.858773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.350 [2024-12-09 05:40:18.858792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:45:27.350 [2024-12-09 05:40:18.858807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:45:27.351 [2024-12-09 05:40:18.858817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.351 [2024-12-09 05:40:18.858906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.351 [2024-12-09 05:40:18.858922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:45:27.351 [2024-12-09 05:40:18.858934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 01:45:27.351 [2024-12-09 05:40:18.858950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.351 [2024-12-09 05:40:18.858980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.351 [2024-12-09 05:40:18.858994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:45:27.351 [2024-12-09 05:40:18.859004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:45:27.351 [2024-12-09 05:40:18.859014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.351 [2024-12-09 05:40:18.859058] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:45:27.351 [2024-12-09 05:40:18.859073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.351 [2024-12-09 05:40:18.859083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:45:27.351 [2024-12-09 05:40:18.859094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 
01:45:27.351 [2024-12-09 05:40:18.859104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.351 [2024-12-09 05:40:18.883939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.351 [2024-12-09 05:40:18.883978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:45:27.351 [2024-12-09 05:40:18.883999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.814 ms 01:45:27.351 [2024-12-09 05:40:18.884010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.351 [2024-12-09 05:40:18.884089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:45:27.351 [2024-12-09 05:40:18.884106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:45:27.351 [2024-12-09 05:40:18.884117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 01:45:27.351 [2024-12-09 05:40:18.884127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:45:27.351 [2024-12-09 05:40:18.885743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.199 ms, result 0 01:45:28.282  [2024-12-09T05:40:21.270Z → 2024-12-09T05:41:03.896Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-09 05:41:03.858208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.279 [2024-12-09 05:41:03.858308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:46:12.279 [2024-12-09 05:41:03.858354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:46:12.279 [2024-12-09 05:41:03.858365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.279 [2024-12-09 05:41:03.860214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:46:12.279 [2024-12-09 05:41:03.865690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.279 [2024-12-09 05:41:03.865743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:46:12.279 [2024-12-09 05:41:03.865757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 01:46:12.279 [2024-12-09 05:41:03.865768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.279 [2024-12-09 05:41:03.877472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.279 [2024-12-09 05:41:03.877515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:46:12.279 [2024-12-09 05:41:03.877546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.721 ms 01:46:12.279 [2024-12-09 05:41:03.877564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.538 [2024-12-09 05:41:03.898442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.538 [2024-12-09 05:41:03.898488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:46:12.538 [2024-12-09 05:41:03.898506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.857 ms 01:46:12.538 [2024-12-09 05:41:03.898519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.538 [2024-12-09 05:41:03.903738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.538 [2024-12-09 05:41:03.903771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:46:12.538 [2024-12-09 05:41:03.903799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.180 ms 01:46:12.538 [2024-12-09 05:41:03.903825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.538 [2024-12-09 05:41:03.929464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.538 [2024-12-09 05:41:03.929502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:46:12.538 [2024-12-09 05:41:03.929532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.572 ms 01:46:12.538 [2024-12-09 05:41:03.929543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.538 [2024-12-09 05:41:03.944474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.538 [2024-12-09 05:41:03.944513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:46:12.539 [2024-12-09 05:41:03.944543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.890 
ms 01:46:12.539 [2024-12-09 05:41:03.944554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.539 [2024-12-09 05:41:04.049277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.539 [2024-12-09 05:41:04.049346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:46:12.539 [2024-12-09 05:41:04.049380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.681 ms 01:46:12.539 [2024-12-09 05:41:04.049392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.539 [2024-12-09 05:41:04.074257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.539 [2024-12-09 05:41:04.074294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:46:12.539 [2024-12-09 05:41:04.074323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.832 ms 01:46:12.539 [2024-12-09 05:41:04.074333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.539 [2024-12-09 05:41:04.098494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.539 [2024-12-09 05:41:04.098531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:46:12.539 [2024-12-09 05:41:04.098560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.123 ms 01:46:12.539 [2024-12-09 05:41:04.098570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.539 [2024-12-09 05:41:04.122423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.539 [2024-12-09 05:41:04.122475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:46:12.539 [2024-12-09 05:41:04.122490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.815 ms 01:46:12.539 [2024-12-09 05:41:04.122500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.539 [2024-12-09 05:41:04.146284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.539 [2024-12-09 05:41:04.146320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:46:12.539 [2024-12-09 05:41:04.146349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.703 ms 01:46:12.539 [2024-12-09 05:41:04.146359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.539 [2024-12-09 05:41:04.146398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:46:12.539 [2024-12-09 05:41:04.146439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115200 / 261120 wr_cnt: 1 state: open 01:46:12.539 [2024-12-09 05:41:04.146453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146514] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 
05:41:04.146829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.146991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
01:46:12.539 [2024-12-09 05:41:04.147108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:46:12.539 [2024-12-09 05:41:04.147241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:46:12.540 [2024-12-09 05:41:04.147590] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:46:12.540 [2024-12-09 05:41:04.147601] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2fe6c684-ea00-40ee-a25d-c4c960459442 01:46:12.540 [2024-12-09 05:41:04.147613] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115200 01:46:12.540 [2024-12-09 05:41:04.147622] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116160 01:46:12.540 [2024-12-09 05:41:04.147633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115200 01:46:12.540 [2024-12-09 05:41:04.147644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 01:46:12.540 [2024-12-09 05:41:04.147680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:46:12.540 [2024-12-09 05:41:04.147692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
01:46:12.540 [2024-12-09 05:41:04.147702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:46:12.540 [2024-12-09 05:41:04.147712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:46:12.540 [2024-12-09 05:41:04.147721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:46:12.540 [2024-12-09 05:41:04.147732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.540 [2024-12-09 05:41:04.147743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:46:12.540 [2024-12-09 05:41:04.147754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 01:46:12.540 [2024-12-09 05:41:04.147764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.803 [2024-12-09 05:41:04.161577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.803 [2024-12-09 05:41:04.161630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:46:12.803 [2024-12-09 05:41:04.161650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.791 ms 01:46:12.803 [2024-12-09 05:41:04.161682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.803 [2024-12-09 05:41:04.162149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:12.803 [2024-12-09 05:41:04.162179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:46:12.803 [2024-12-09 05:41:04.162193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 01:46:12.803 [2024-12-09 05:41:04.162204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.803 [2024-12-09 05:41:04.198068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.803 [2024-12-09 05:41:04.198114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:46:12.803 [2024-12-09 05:41:04.198143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.803 [2024-12-09 05:41:04.198154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.803 [2024-12-09 05:41:04.198206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.803 [2024-12-09 05:41:04.198221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:46:12.803 [2024-12-09 05:41:04.198231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.803 [2024-12-09 05:41:04.198241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.198309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.198332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:46:12.804 [2024-12-09 05:41:04.198359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.198369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.198407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.198438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:46:12.804 [2024-12-09 05:41:04.198454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.198465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.282537] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.282603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:46:12.804 [2024-12-09 05:41:04.282635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.282646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.351237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.351302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:46:12.804 [2024-12-09 05:41:04.351335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.351346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.351433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.351450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:46:12.804 [2024-12-09 05:41:04.351462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.351480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.351526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.351549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:46:12.804 [2024-12-09 05:41:04.351576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.351602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.351743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.351763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:46:12.804 [2024-12-09 05:41:04.351775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.351792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.351844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.351862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:46:12.804 [2024-12-09 05:41:04.351873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.351884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.351944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.351960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:46:12.804 [2024-12-09 05:41:04.351973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.351984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:12.804 [2024-12-09 05:41:04.352044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:46:12.804 [2024-12-09 05:41:04.352060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:46:12.804 [2024-12-09 05:41:04.352077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:46:12.804 [2024-12-09 05:41:04.352088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 01:46:12.804 [2024-12-09 05:41:04.352281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.251 ms, result 0 01:46:14.713 01:46:14.713 01:46:14.713 05:41:05 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 01:46:14.713 [2024-12-09 05:41:06.010289] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:46:14.713 [2024-12-09 05:41:06.010480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80837 ] 01:46:14.713 [2024-12-09 05:41:06.188515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:46:14.713 [2024-12-09 05:41:06.286127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:46:15.280 [2024-12-09 05:41:06.600845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:46:15.280 [2024-12-09 05:41:06.600935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:46:15.280 [2024-12-09 05:41:06.759553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.759601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:46:15.280 [2024-12-09 05:41:06.759640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:46:15.280 [2024-12-09 05:41:06.759650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.759736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.759757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:46:15.280 [2024-12-09 05:41:06.759768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 01:46:15.280 [2024-12-09 05:41:06.759793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.759838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:46:15.280 [2024-12-09 05:41:06.760689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:46:15.280 [2024-12-09 05:41:06.760753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.760766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:46:15.280 [2024-12-09 05:41:06.760778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 01:46:15.280 [2024-12-09 05:41:06.760787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.762892] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:46:15.280 [2024-12-09 05:41:06.776688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.776726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:46:15.280 [2024-12-09 05:41:06.776761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.798 ms 01:46:15.280 [2024-12-09 05:41:06.776772] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.776840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.776889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:46:15.280 [2024-12-09 05:41:06.776901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 01:46:15.280 [2024-12-09 05:41:06.776911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.785323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.785360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:46:15.280 [2024-12-09 05:41:06.785395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.342 ms 01:46:15.280 [2024-12-09 05:41:06.785411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.785495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.785544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:46:15.280 [2024-12-09 05:41:06.785556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:46:15.280 [2024-12-09 05:41:06.785567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.785639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.785656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:46:15.280 [2024-12-09 05:41:06.785668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:46:15.280 [2024-12-09 05:41:06.785678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.785751] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:46:15.280 [2024-12-09 05:41:06.789984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.790036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:46:15.280 [2024-12-09 05:41:06.790090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.276 ms 01:46:15.280 [2024-12-09 05:41:06.790100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.790135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.790149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:46:15.280 [2024-12-09 05:41:06.790161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:46:15.280 [2024-12-09 05:41:06.790186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.790244] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:46:15.280 [2024-12-09 05:41:06.790273] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:46:15.280 [2024-12-09 05:41:06.790310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:46:15.280 [2024-12-09 05:41:06.790342] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:46:15.280 [2024-12-09 05:41:06.790471] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:46:15.280 [2024-12-09 05:41:06.790493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:46:15.280 [2024-12-09 05:41:06.790508] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:46:15.280 [2024-12-09 05:41:06.790522] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:46:15.280 [2024-12-09 05:41:06.790534] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:46:15.280 [2024-12-09 05:41:06.790555] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:46:15.280 [2024-12-09 05:41:06.790565] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:46:15.280 [2024-12-09 05:41:06.790581] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:46:15.280 [2024-12-09 05:41:06.790591] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:46:15.280 [2024-12-09 05:41:06.790602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.790613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:46:15.280 [2024-12-09 05:41:06.790624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 01:46:15.280 [2024-12-09 05:41:06.790634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.790752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.280 [2024-12-09 05:41:06.790776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:46:15.280 [2024-12-09 05:41:06.790788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 01:46:15.280 [2024-12-09 05:41:06.790799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.280 [2024-12-09 05:41:06.790911] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:46:15.280 [2024-12-09 05:41:06.790937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:46:15.280 [2024-12-09 05:41:06.790949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:46:15.280 [2024-12-09 05:41:06.790960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:46:15.280 [2024-12-09 05:41:06.790970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:46:15.281 [2024-12-09 05:41:06.790981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:46:15.281 [2024-12-09 05:41:06.790991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:46:15.281 [2024-12-09 05:41:06.791010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:46:15.281 [2024-12-09 05:41:06.791028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:46:15.281 [2024-12-09 05:41:06.791038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:46:15.281 [2024-12-09 05:41:06.791047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:46:15.281 [2024-12-09 
05:41:06.791069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:46:15.281 [2024-12-09 05:41:06.791079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:46:15.281 [2024-12-09 05:41:06.791089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:46:15.281 [2024-12-09 05:41:06.791107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:46:15.281 [2024-12-09 05:41:06.791134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:46:15.281 [2024-12-09 05:41:06.791161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:46:15.281 [2024-12-09 05:41:06.791188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:46:15.281 [2024-12-09 05:41:06.791216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:46:15.281 [2024-12-09 05:41:06.791250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:46:15.281 [2024-12-09 05:41:06.791268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:46:15.281 [2024-12-09 05:41:06.791277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:46:15.281 [2024-12-09 05:41:06.791286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:46:15.281 [2024-12-09 05:41:06.791297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:46:15.281 [2024-12-09 05:41:06.791308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:46:15.281 [2024-12-09 05:41:06.791317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:46:15.281 [2024-12-09 05:41:06.791336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:46:15.281 [2024-12-09 05:41:06.791345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791354] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:46:15.281 [2024-12-09 05:41:06.791365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 01:46:15.281 [2024-12-09 05:41:06.791375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:46:15.281 [2024-12-09 05:41:06.791394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:46:15.281 [2024-12-09 05:41:06.791403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:46:15.281 [2024-12-09 05:41:06.791413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:46:15.281 [2024-12-09 05:41:06.791422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:46:15.281 [2024-12-09 05:41:06.791431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:46:15.281 [2024-12-09 05:41:06.791440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:46:15.281 [2024-12-09 05:41:06.791450] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:46:15.281 [2024-12-09 05:41:06.791463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:46:15.281 [2024-12-09 05:41:06.791490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:46:15.281 [2024-12-09 05:41:06.791500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:46:15.281 [2024-12-09 05:41:06.791509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:46:15.281 [2024-12-09 05:41:06.791526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:46:15.281 [2024-12-09 05:41:06.791535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:46:15.281 [2024-12-09 05:41:06.791545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:46:15.281 [2024-12-09 05:41:06.791555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:46:15.281 [2024-12-09 05:41:06.791564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:46:15.281 [2024-12-09 05:41:06.791574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791614] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:46:15.281 [2024-12-09 05:41:06.791624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:46:15.281 [2024-12-09 05:41:06.791636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:46:15.281 [2024-12-09 05:41:06.791658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:46:15.281 [2024-12-09 05:41:06.791706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:46:15.281 [2024-12-09 05:41:06.791718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:46:15.281 [2024-12-09 05:41:06.791731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.791742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:46:15.281 [2024-12-09 05:41:06.791753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 01:46:15.281 [2024-12-09 05:41:06.791764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.827396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.827454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:46:15.281 [2024-12-09 05:41:06.827487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.571 ms 01:46:15.281 [2024-12-09 05:41:06.827504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.827604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.827627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:46:15.281 [2024-12-09 05:41:06.827654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 01:46:15.281 [2024-12-09 05:41:06.827664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.875522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.875586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:46:15.281 [2024-12-09 05:41:06.875619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.733 ms 01:46:15.281 [2024-12-09 05:41:06.875630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.875699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.875748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:46:15.281 [2024-12-09 05:41:06.875767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:46:15.281 [2024-12-09 05:41:06.875792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.876447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 
05:41:06.876491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:46:15.281 [2024-12-09 05:41:06.876506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 01:46:15.281 [2024-12-09 05:41:06.876516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.876719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.876789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:46:15.281 [2024-12-09 05:41:06.876810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 01:46:15.281 [2024-12-09 05:41:06.876822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.281 [2024-12-09 05:41:06.893600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.281 [2024-12-09 05:41:06.893676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:46:15.282 [2024-12-09 05:41:06.893692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.750 ms 01:46:15.282 [2024-12-09 05:41:06.893703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.540 [2024-12-09 05:41:06.907619] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 01:46:15.540 [2024-12-09 05:41:06.907659] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:46:15.541 [2024-12-09 05:41:06.907703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:06.907715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:46:15.541 [2024-12-09 05:41:06.907727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.844 ms 01:46:15.541 [2024-12-09 05:41:06.907737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:06.931177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:06.931219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:46:15.541 [2024-12-09 05:41:06.931250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.391 ms 01:46:15.541 [2024-12-09 05:41:06.931261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:06.943641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:06.943704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:46:15.541 [2024-12-09 05:41:06.943734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.329 ms 01:46:15.541 [2024-12-09 05:41:06.943744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:06.955887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:06.955940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:46:15.541 [2024-12-09 05:41:06.955969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.099 ms 01:46:15.541 [2024-12-09 05:41:06.955979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:06.956730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:06.956789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 01:46:15.541 [2024-12-09 05:41:06.956823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 01:46:15.541 [2024-12-09 05:41:06.956834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.024484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.024555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:46:15.541 [2024-12-09 05:41:07.024596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.621 ms 01:46:15.541 [2024-12-09 05:41:07.024607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.035155] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:46:15.541 [2024-12-09 05:41:07.037474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.037504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:46:15.541 [2024-12-09 05:41:07.037534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 01:46:15.541 [2024-12-09 05:41:07.037544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.037653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.037672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:46:15.541 [2024-12-09 05:41:07.037735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:46:15.541 [2024-12-09 05:41:07.037746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.039633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.039696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:46:15.541 [2024-12-09 05:41:07.039726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 01:46:15.541 [2024-12-09 05:41:07.039736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.039773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.039788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:46:15.541 [2024-12-09 05:41:07.039800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:46:15.541 [2024-12-09 05:41:07.039810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.039897] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:46:15.541 [2024-12-09 05:41:07.039914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.039926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:46:15.541 [2024-12-09 05:41:07.039937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:46:15.541 [2024-12-09 05:41:07.039948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.067132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.067188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:46:15.541 [2024-12-09 05:41:07.067225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 27.156 ms 01:46:15.541 [2024-12-09 05:41:07.067237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.067323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:46:15.541 [2024-12-09 05:41:07.067373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:46:15.541 [2024-12-09 05:41:07.067385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 01:46:15.541 [2024-12-09 05:41:07.067396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:46:15.541 [2024-12-09 05:41:07.071410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.065 ms, result 0 01:46:16.915
[2024-12-09T05:41:54.522Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-12-09 05:41:54.288002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.288330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:47:02.905 [2024-12-09 05:41:54.288795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:47:02.905 [2024-12-09 05:41:54.288939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.289017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:47:02.905 [2024-12-09 05:41:54.292585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.292761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:47:02.905 [2024-12-09 05:41:54.292875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 01:47:02.905 [2024-12-09 05:41:54.292921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.293275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.293429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:47:02.905 [2024-12-09 05:41:54.293536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 01:47:02.905 [2024-12-09 05:41:54.293705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.299050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.299303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:47:02.905 [2024-12-09 05:41:54.299415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms 01:47:02.905 [2024-12-09 05:41:54.299461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.305374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.305535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:47:02.905 [2024-12-09 05:41:54.305633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 01:47:02.905 [2024-12-09 05:41:54.305726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.332633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.332857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:47:02.905 [2024-12-09 05:41:54.332978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.764 ms 01:47:02.905 [2024-12-09 05:41:54.333025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.348933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.349113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:47:02.905 [2024-12-09 05:41:54.349218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.824 ms 01:47:02.905 [2024-12-09 05:41:54.349264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.484817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.485075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 01:47:02.905 [2024-12-09 05:41:54.485201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 135.481 ms 01:47:02.905 [2024-12-09 05:41:54.485250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:02.905 [2024-12-09 05:41:54.511901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:02.905 [2024-12-09 05:41:54.512030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:47:02.905 [2024-12-09 05:41:54.512100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.505 ms 01:47:02.905 [2024-12-09 05:41:54.512139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.163 [2024-12-09 05:41:54.536904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:03.164 [2024-12-09 05:41:54.536941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:47:03.164 [2024-12-09 05:41:54.536971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.681 ms 01:47:03.164 [2024-12-09 05:41:54.536982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.561456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:03.164 [2024-12-09 05:41:54.561507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:47:03.164 [2024-12-09 05:41:54.561521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.435 ms 01:47:03.164 [2024-12-09 05:41:54.561530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.585980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:03.164 [2024-12-09 05:41:54.586031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:47:03.164 [2024-12-09 05:41:54.586045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.384 ms 01:47:03.164 [2024-12-09 05:41:54.586054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.586093] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:47:03.164 [2024-12-09 05:41:54.586115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 01:47:03.164 [2024-12-09 05:41:54.586128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.586206] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free [Bands 11 through 99 elided: every entry reads 0 / 261120 wr_cnt: 0 state: free, identical to Bands 2-10] 01:47:03.164 [2024-12-09 05:41:54.587223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:47:03.164 [2024-12-09 05:41:54.587242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:47:03.164 [2024-12-09 05:41:54.587252] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2fe6c684-ea00-40ee-a25d-c4c960459442 01:47:03.164 [2024-12-09 05:41:54.587264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 01:47:03.164 [2024-12-09 05:41:54.587274] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16832 01:47:03.164 [2024-12-09 05:41:54.587284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15872 01:47:03.164 [2024-12-09 05:41:54.587295] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0605 01:47:03.164 [2024-12-09 05:41:54.587312] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:47:03.164 [2024-12-09 05:41:54.587334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:47:03.164 [2024-12-09 05:41:54.587345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:47:03.164 [2024-12-09 05:41:54.587354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:47:03.164 [2024-12-09 05:41:54.587363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] start: 0 01:47:03.164 [2024-12-09 05:41:54.587373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:03.164 [2024-12-09 05:41:54.587384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:47:03.164 [2024-12-09 05:41:54.587395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 01:47:03.164 [2024-12-09 05:41:54.587405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.601826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:03.164 [2024-12-09 05:41:54.601872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:47:03.164 [2024-12-09 05:41:54.601894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.399 ms 01:47:03.164 [2024-12-09 05:41:54.601904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.602370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:03.164 [2024-12-09 05:41:54.602396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:47:03.164 [2024-12-09 05:41:54.602425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 01:47:03.164 [2024-12-09 05:41:54.602437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.638878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.164 [2024-12-09 05:41:54.638937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:47:03.164 [2024-12-09 05:41:54.638951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.164 [2024-12-09 05:41:54.638961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.639018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.164 [2024-12-09 05:41:54.639032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:47:03.164 [2024-12-09 05:41:54.639042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.164 [2024-12-09 05:41:54.639052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.639121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.164 [2024-12-09 05:41:54.639140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:47:03.164 [2024-12-09 05:41:54.639157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.164 [2024-12-09 05:41:54.639167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.164 [2024-12-09 05:41:54.639187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.164 [2024-12-09 05:41:54.639200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:47:03.165 [2024-12-09 05:41:54.639210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.165 [2024-12-09 05:41:54.639219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.165 [2024-12-09 05:41:54.725153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.165 [2024-12-09 05:41:54.725231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:47:03.165 [2024-12-09 05:41:54.725248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.165 
[2024-12-09 05:41:54.725259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.794549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.794601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:47:03.422 [2024-12-09 05:41:54.794615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.794626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.794754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.794772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:47:03.422 [2024-12-09 05:41:54.794783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.794799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.794844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.794858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:47:03.422 [2024-12-09 05:41:54.794868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.794878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.794993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.795011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:47:03.422 [2024-12-09 05:41:54.795023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.795033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.795097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.795114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:47:03.422 [2024-12-09 05:41:54.795124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.795134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.795174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.795187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:47:03.422 [2024-12-09 05:41:54.795198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.795207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.795269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:47:03.422 [2024-12-09 05:41:54.795288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:47:03.422 [2024-12-09 05:41:54.795298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:47:03.422 [2024-12-09 05:41:54.795308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:03.422 [2024-12-09 05:41:54.795442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.409 ms, result 0 01:47:04.354 01:47:04.354 01:47:04.354 05:41:55 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:47:06.253 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:47:06.253 Process with pid 79182 is not found 01:47:06.253 Remove shared memory files 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79182 01:47:06.253 05:41:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79182 ']' 01:47:06.253 05:41:57 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79182 01:47:06.253 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79182) - No such process 01:47:06.253 05:41:57 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79182 is not found' 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:47:06.253 05:41:57 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 01:47:06.253 01:47:06.253 real 3m37.336s 01:47:06.253 user 3m22.966s 01:47:06.253 sys 0m16.098s 01:47:06.253 05:41:57 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 01:47:06.253 05:41:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 01:47:06.253 ************************************ 01:47:06.253 END TEST ftl_restore 01:47:06.253 ************************************ 01:47:06.253 05:41:57 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 01:47:06.253 05:41:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:47:06.253 05:41:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:47:06.253 05:41:57 ftl -- common/autotest_common.sh@10 -- # set +x 01:47:06.253 ************************************ 01:47:06.253 START TEST ftl_dirty_shutdown 01:47:06.253 ************************************ 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 01:47:06.253 * Looking for test storage... 
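The run_test invocation above hands dirty_shutdown.sh the NV cache controller via -c 0000:00:10.0 and the base device 0000:00:11.0 as a positional argument; the xtrace that follows shows the script consuming them through getopts ':u:c:', a shift, and a positional assignment, then fixing timeout=240, block_size=4096, chunk_size=262144 and data_size=262144. A condensed sketch of that argument handling, using the variable names visible in the trace (the -u branch is not exercised in this run, so its variable name here is a guess):

  # Sketch only -- not the verbatim upstream script.
  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0: NV cache controller BDF
      u) uuid=$OPTARG ;;       # assumed meaning; accepted but unused in this run
    esac
  done
  shift 2                      # the trace shows 'shift 2'; upstream presumably
                               # derives it as shift $((OPTIND - 1))
  device=$1                    # remaining positional arg: 0000:00:11.0
  timeout=240 block_size=4096 chunk_size=262144 data_size=262144

With '-c 0000:00:10.0 0000:00:11.0' on the command line, getopts leaves OPTIND at 3, so the shift drops the option pair and $1 is the base device's PCI address.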
01:47:06.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 01:47:06.253 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:47:06.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:47:06.254 --rc genhtml_branch_coverage=1 01:47:06.254 --rc genhtml_function_coverage=1 01:47:06.254 --rc genhtml_legend=1 01:47:06.254 --rc geninfo_all_blocks=1 01:47:06.254 --rc geninfo_unexecuted_blocks=1 01:47:06.254 01:47:06.254 ' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:47:06.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:47:06.254 --rc genhtml_branch_coverage=1 01:47:06.254 --rc genhtml_function_coverage=1 01:47:06.254 --rc genhtml_legend=1 01:47:06.254 --rc geninfo_all_blocks=1 01:47:06.254 --rc geninfo_unexecuted_blocks=1 01:47:06.254 01:47:06.254 ' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:47:06.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:47:06.254 --rc genhtml_branch_coverage=1 01:47:06.254 --rc genhtml_function_coverage=1 01:47:06.254 --rc genhtml_legend=1 01:47:06.254 --rc geninfo_all_blocks=1 01:47:06.254 --rc geninfo_unexecuted_blocks=1 01:47:06.254 01:47:06.254 ' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:47:06.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:47:06.254 --rc genhtml_branch_coverage=1 01:47:06.254 --rc genhtml_function_coverage=1 01:47:06.254 --rc genhtml_legend=1 01:47:06.254 --rc geninfo_all_blocks=1 01:47:06.254 --rc geninfo_unexecuted_blocks=1 01:47:06.254 01:47:06.254 ' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 01:47:06.254 05:41:57 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81411 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81411 01:47:06.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81411 ']' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:47:06.254 05:41:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 01:47:06.513 [2024-12-09 05:41:57.961124] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
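At this point the harness has launched the SPDK target (svcpid=81411 above) and parked in waitforlisten until the target's RPC socket accepts commands; the "Starting SPDK v25.01-pre ..." banner is the target's own startup output, and its DPDK EAL parameter record follows next. A minimal sketch of that launch-and-wait handshake; the polling loop is illustrative (the harness's waitforlisten helper does more bookkeeping), while the spdk_tgt binary, rpc.py, and the spdk_get_version RPC are the ones used in this run:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 &   # -m 0x1: run the reactor on core 0 only
  svcpid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
  until "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
    sleep 0.5
  done

Once the socket answers, the script proceeds to the bdev_nvme_attach_controller and bdev_get_bdevs calls traced below.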
01:47:06.513 [2024-12-09 05:41:57.961597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81411 ] 01:47:06.772 [2024-12-09 05:41:58.150332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:47:06.772 [2024-12-09 05:41:58.305728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 01:47:07.710 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:47:07.969 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:47:08.229 { 01:47:08.229 "name": "nvme0n1", 01:47:08.229 "aliases": [ 01:47:08.229 "d67438dc-93b1-47aa-a630-6599cf125da8" 01:47:08.229 ], 01:47:08.229 "product_name": "NVMe disk", 01:47:08.229 "block_size": 4096, 01:47:08.229 "num_blocks": 1310720, 01:47:08.229 "uuid": "d67438dc-93b1-47aa-a630-6599cf125da8", 01:47:08.229 "numa_id": -1, 01:47:08.229 "assigned_rate_limits": { 01:47:08.229 "rw_ios_per_sec": 0, 01:47:08.229 "rw_mbytes_per_sec": 0, 01:47:08.229 "r_mbytes_per_sec": 0, 01:47:08.229 "w_mbytes_per_sec": 0 01:47:08.229 }, 01:47:08.229 "claimed": true, 01:47:08.229 "claim_type": "read_many_write_one", 01:47:08.229 "zoned": false, 01:47:08.229 "supported_io_types": { 01:47:08.229 "read": true, 01:47:08.229 "write": true, 01:47:08.229 "unmap": true, 01:47:08.229 "flush": true, 01:47:08.229 "reset": true, 01:47:08.229 "nvme_admin": true, 01:47:08.229 "nvme_io": true, 01:47:08.229 "nvme_io_md": false, 01:47:08.229 "write_zeroes": true, 01:47:08.229 "zcopy": false, 01:47:08.229 "get_zone_info": false, 01:47:08.229 "zone_management": false, 01:47:08.229 "zone_append": false, 01:47:08.229 "compare": true, 01:47:08.229 "compare_and_write": false, 01:47:08.229 "abort": true, 01:47:08.229 "seek_hole": false, 01:47:08.229 "seek_data": false, 01:47:08.229 
"copy": true, 01:47:08.229 "nvme_iov_md": false 01:47:08.229 }, 01:47:08.229 "driver_specific": { 01:47:08.229 "nvme": [ 01:47:08.229 { 01:47:08.229 "pci_address": "0000:00:11.0", 01:47:08.229 "trid": { 01:47:08.229 "trtype": "PCIe", 01:47:08.229 "traddr": "0000:00:11.0" 01:47:08.229 }, 01:47:08.229 "ctrlr_data": { 01:47:08.229 "cntlid": 0, 01:47:08.229 "vendor_id": "0x1b36", 01:47:08.229 "model_number": "QEMU NVMe Ctrl", 01:47:08.229 "serial_number": "12341", 01:47:08.229 "firmware_revision": "8.0.0", 01:47:08.229 "subnqn": "nqn.2019-08.org.qemu:12341", 01:47:08.229 "oacs": { 01:47:08.229 "security": 0, 01:47:08.229 "format": 1, 01:47:08.229 "firmware": 0, 01:47:08.229 "ns_manage": 1 01:47:08.229 }, 01:47:08.229 "multi_ctrlr": false, 01:47:08.229 "ana_reporting": false 01:47:08.229 }, 01:47:08.229 "vs": { 01:47:08.229 "nvme_version": "1.4" 01:47:08.229 }, 01:47:08.229 "ns_data": { 01:47:08.229 "id": 1, 01:47:08.229 "can_share": false 01:47:08.229 } 01:47:08.229 } 01:47:08.229 ], 01:47:08.229 "mp_policy": "active_passive" 01:47:08.229 } 01:47:08.229 } 01:47:08.229 ]' 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:47:08.229 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:47:08.489 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=366a8497-de20-4d4a-afe1-1d22a8c3c95d 01:47:08.489 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 01:47:08.489 05:41:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 366a8497-de20-4d4a-afe1-1d22a8c3c95d 01:47:08.749 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 01:47:09.007 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7d2d15e4-dece-473b-bcaa-61bd1d89c879 01:47:09.007 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7d2d15e4-dece-473b-bcaa-61bd1d89c879 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=a9414063-a665-4794-9ff9-6319d1826313 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a9414063-a665-4794-9ff9-6319d1826313 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=a9414063-a665-4794-9ff9-6319d1826313 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size a9414063-a665-4794-9ff9-6319d1826313 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a9414063-a665-4794-9ff9-6319d1826313 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:47:09.265 05:42:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9414063-a665-4794-9ff9-6319d1826313 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:47:09.524 { 01:47:09.524 "name": "a9414063-a665-4794-9ff9-6319d1826313", 01:47:09.524 "aliases": [ 01:47:09.524 "lvs/nvme0n1p0" 01:47:09.524 ], 01:47:09.524 "product_name": "Logical Volume", 01:47:09.524 "block_size": 4096, 01:47:09.524 "num_blocks": 26476544, 01:47:09.524 "uuid": "a9414063-a665-4794-9ff9-6319d1826313", 01:47:09.524 "assigned_rate_limits": { 01:47:09.524 "rw_ios_per_sec": 0, 01:47:09.524 "rw_mbytes_per_sec": 0, 01:47:09.524 "r_mbytes_per_sec": 0, 01:47:09.524 "w_mbytes_per_sec": 0 01:47:09.524 }, 01:47:09.524 "claimed": false, 01:47:09.524 "zoned": false, 01:47:09.524 "supported_io_types": { 01:47:09.524 "read": true, 01:47:09.524 "write": true, 01:47:09.524 "unmap": true, 01:47:09.524 "flush": false, 01:47:09.524 "reset": true, 01:47:09.524 "nvme_admin": false, 01:47:09.524 "nvme_io": false, 01:47:09.524 "nvme_io_md": false, 01:47:09.524 "write_zeroes": true, 01:47:09.524 "zcopy": false, 01:47:09.524 "get_zone_info": false, 01:47:09.524 "zone_management": false, 01:47:09.524 "zone_append": false, 01:47:09.524 "compare": false, 01:47:09.524 "compare_and_write": false, 01:47:09.524 "abort": false, 01:47:09.524 "seek_hole": true, 01:47:09.524 "seek_data": true, 01:47:09.524 "copy": false, 01:47:09.524 "nvme_iov_md": false 01:47:09.524 }, 01:47:09.524 "driver_specific": { 01:47:09.524 "lvol": { 01:47:09.524 "lvol_store_uuid": "7d2d15e4-dece-473b-bcaa-61bd1d89c879", 01:47:09.524 "base_bdev": "nvme0n1", 01:47:09.524 "thin_provision": true, 01:47:09.524 "num_allocated_clusters": 0, 01:47:09.524 "snapshot": false, 01:47:09.524 "clone": false, 01:47:09.524 "esnap_clone": false 01:47:09.524 } 01:47:09.524 } 01:47:09.524 } 01:47:09.524 ]' 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 01:47:09.524 05:42:01 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size a9414063-a665-4794-9ff9-6319d1826313 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a9414063-a665-4794-9ff9-6319d1826313 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:47:10.090 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9414063-a665-4794-9ff9-6319d1826313 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:47:10.348 { 01:47:10.348 "name": "a9414063-a665-4794-9ff9-6319d1826313", 01:47:10.348 "aliases": [ 01:47:10.348 "lvs/nvme0n1p0" 01:47:10.348 ], 01:47:10.348 "product_name": "Logical Volume", 01:47:10.348 "block_size": 4096, 01:47:10.348 "num_blocks": 26476544, 01:47:10.348 "uuid": "a9414063-a665-4794-9ff9-6319d1826313", 01:47:10.348 "assigned_rate_limits": { 01:47:10.348 "rw_ios_per_sec": 0, 01:47:10.348 "rw_mbytes_per_sec": 0, 01:47:10.348 "r_mbytes_per_sec": 0, 01:47:10.348 "w_mbytes_per_sec": 0 01:47:10.348 }, 01:47:10.348 "claimed": false, 01:47:10.348 "zoned": false, 01:47:10.348 "supported_io_types": { 01:47:10.348 "read": true, 01:47:10.348 "write": true, 01:47:10.348 "unmap": true, 01:47:10.348 "flush": false, 01:47:10.348 "reset": true, 01:47:10.348 "nvme_admin": false, 01:47:10.348 "nvme_io": false, 01:47:10.348 "nvme_io_md": false, 01:47:10.348 "write_zeroes": true, 01:47:10.348 "zcopy": false, 01:47:10.348 "get_zone_info": false, 01:47:10.348 "zone_management": false, 01:47:10.348 "zone_append": false, 01:47:10.348 "compare": false, 01:47:10.348 "compare_and_write": false, 01:47:10.348 "abort": false, 01:47:10.348 "seek_hole": true, 01:47:10.348 "seek_data": true, 01:47:10.348 "copy": false, 01:47:10.348 "nvme_iov_md": false 01:47:10.348 }, 01:47:10.348 "driver_specific": { 01:47:10.348 "lvol": { 01:47:10.348 "lvol_store_uuid": "7d2d15e4-dece-473b-bcaa-61bd1d89c879", 01:47:10.348 "base_bdev": "nvme0n1", 01:47:10.348 "thin_provision": true, 01:47:10.348 "num_allocated_clusters": 0, 01:47:10.348 "snapshot": false, 01:47:10.348 "clone": false, 01:47:10.348 "esnap_clone": false 01:47:10.348 } 01:47:10.348 } 01:47:10.348 } 01:47:10.348 ]' 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 01:47:10.348 05:42:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size a9414063-a665-4794-9ff9-6319d1826313 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a9414063-a665-4794-9ff9-6319d1826313 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:47:10.607 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9414063-a665-4794-9ff9-6319d1826313 01:47:10.865 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:47:10.865 { 01:47:10.865 "name": "a9414063-a665-4794-9ff9-6319d1826313", 01:47:10.865 "aliases": [ 01:47:10.865 "lvs/nvme0n1p0" 01:47:10.865 ], 01:47:10.865 "product_name": "Logical Volume", 01:47:10.865 "block_size": 4096, 01:47:10.865 "num_blocks": 26476544, 01:47:10.865 "uuid": "a9414063-a665-4794-9ff9-6319d1826313", 01:47:10.866 "assigned_rate_limits": { 01:47:10.866 "rw_ios_per_sec": 0, 01:47:10.866 "rw_mbytes_per_sec": 0, 01:47:10.866 "r_mbytes_per_sec": 0, 01:47:10.866 "w_mbytes_per_sec": 0 01:47:10.866 }, 01:47:10.866 "claimed": false, 01:47:10.866 "zoned": false, 01:47:10.866 "supported_io_types": { 01:47:10.866 "read": true, 01:47:10.866 "write": true, 01:47:10.866 "unmap": true, 01:47:10.866 "flush": false, 01:47:10.866 "reset": true, 01:47:10.866 "nvme_admin": false, 01:47:10.866 "nvme_io": false, 01:47:10.866 "nvme_io_md": false, 01:47:10.866 "write_zeroes": true, 01:47:10.866 "zcopy": false, 01:47:10.866 "get_zone_info": false, 01:47:10.866 "zone_management": false, 01:47:10.866 "zone_append": false, 01:47:10.866 "compare": false, 01:47:10.866 "compare_and_write": false, 01:47:10.866 "abort": false, 01:47:10.866 "seek_hole": true, 01:47:10.866 "seek_data": true, 01:47:10.866 "copy": false, 01:47:10.866 "nvme_iov_md": false 01:47:10.866 }, 01:47:10.866 "driver_specific": { 01:47:10.866 "lvol": { 01:47:10.866 "lvol_store_uuid": "7d2d15e4-dece-473b-bcaa-61bd1d89c879", 01:47:10.866 "base_bdev": "nvme0n1", 01:47:10.866 "thin_provision": true, 01:47:10.866 "num_allocated_clusters": 0, 01:47:10.866 "snapshot": false, 01:47:10.866 "clone": false, 01:47:10.866 "esnap_clone": false 01:47:10.866 } 01:47:10.866 } 01:47:10.866 } 01:47:10.866 ]' 01:47:10.866 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:47:10.866 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:47:10.866 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a9414063-a665-4794-9ff9-6319d1826313 
--l2p_dram_limit 10' 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 01:47:11.124 05:42:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a9414063-a665-4794-9ff9-6319d1826313 --l2p_dram_limit 10 -c nvc0n1p0 01:47:11.383 [2024-12-09 05:42:02.761308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.761518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:47:11.383 [2024-12-09 05:42:02.761555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 01:47:11.383 [2024-12-09 05:42:02.761570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.761664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.761706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:47:11.383 [2024-12-09 05:42:02.761725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 01:47:11.383 [2024-12-09 05:42:02.761737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.761770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:47:11.383 [2024-12-09 05:42:02.762710] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:47:11.383 [2024-12-09 05:42:02.762757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.762769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:47:11.383 [2024-12-09 05:42:02.762783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 01:47:11.383 [2024-12-09 05:42:02.762802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.762952] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f978506d-1366-4345-9c7a-0084bf34ece6 01:47:11.383 [2024-12-09 05:42:02.764807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.764849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 01:47:11.383 [2024-12-09 05:42:02.764865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 01:47:11.383 [2024-12-09 05:42:02.764882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.774215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.774267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:47:11.383 [2024-12-09 05:42:02.774283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.283 ms 01:47:11.383 [2024-12-09 05:42:02.774296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.774401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.774433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:47:11.383 [2024-12-09 05:42:02.774446] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 01:47:11.383 [2024-12-09 05:42:02.774463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.774539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.774559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:47:11.383 [2024-12-09 05:42:02.774574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:47:11.383 [2024-12-09 05:42:02.774587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.774617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:47:11.383 [2024-12-09 05:42:02.779180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.779364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:47:11.383 [2024-12-09 05:42:02.779399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.568 ms 01:47:11.383 [2024-12-09 05:42:02.779414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.779463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.779478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:47:11.383 [2024-12-09 05:42:02.779493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 01:47:11.383 [2024-12-09 05:42:02.779504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.779552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 01:47:11.383 [2024-12-09 05:42:02.779733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:47:11.383 [2024-12-09 05:42:02.779761] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:47:11.383 [2024-12-09 05:42:02.779776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:47:11.383 [2024-12-09 05:42:02.779794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:47:11.383 [2024-12-09 05:42:02.779806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:47:11.383 [2024-12-09 05:42:02.779821] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:47:11.383 [2024-12-09 05:42:02.779834] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:47:11.383 [2024-12-09 05:42:02.779847] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:47:11.383 [2024-12-09 05:42:02.779858] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:47:11.383 [2024-12-09 05:42:02.779887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.779923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:47:11.383 [2024-12-09 05:42:02.779937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 01:47:11.383 [2024-12-09 05:42:02.779947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.780030] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.383 [2024-12-09 05:42:02.780044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:47:11.383 [2024-12-09 05:42:02.780057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 01:47:11.383 [2024-12-09 05:42:02.780066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.383 [2024-12-09 05:42:02.780168] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:47:11.383 [2024-12-09 05:42:02.780184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:47:11.383 [2024-12-09 05:42:02.780198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:47:11.383 [2024-12-09 05:42:02.780209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:47:11.383 [2024-12-09 05:42:02.780221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:47:11.383 [2024-12-09 05:42:02.780230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:47:11.383 [2024-12-09 05:42:02.780242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:47:11.383 [2024-12-09 05:42:02.780251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:47:11.383 [2024-12-09 05:42:02.780263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:47:11.383 [2024-12-09 05:42:02.780273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:47:11.383 [2024-12-09 05:42:02.780284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:47:11.383 [2024-12-09 05:42:02.780294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:47:11.383 [2024-12-09 05:42:02.780306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:47:11.383 [2024-12-09 05:42:02.780315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:47:11.383 [2024-12-09 05:42:02.780327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:47:11.383 [2024-12-09 05:42:02.780337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:47:11.383 [2024-12-09 05:42:02.780351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:47:11.383 [2024-12-09 05:42:02.780361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:47:11.383 [2024-12-09 05:42:02.780374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:47:11.383 [2024-12-09 05:42:02.780385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:47:11.383 [2024-12-09 05:42:02.780397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:47:11.383 [2024-12-09 05:42:02.780406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:47:11.383 [2024-12-09 05:42:02.780417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:47:11.383 [2024-12-09 05:42:02.780427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:47:11.384 [2024-12-09 05:42:02.780448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:47:11.384 [2024-12-09 05:42:02.780460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:47:11.384 [2024-12-09 05:42:02.780480] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:47:11.384 [2024-12-09 05:42:02.780490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:47:11.384 [2024-12-09 05:42:02.780511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:47:11.384 [2024-12-09 05:42:02.780525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:47:11.384 [2024-12-09 05:42:02.780546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:47:11.384 [2024-12-09 05:42:02.780556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:47:11.384 [2024-12-09 05:42:02.780567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:47:11.384 [2024-12-09 05:42:02.780576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:47:11.384 [2024-12-09 05:42:02.780588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:47:11.384 [2024-12-09 05:42:02.780597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:47:11.384 [2024-12-09 05:42:02.780619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:47:11.384 [2024-12-09 05:42:02.780630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780639] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:47:11.384 [2024-12-09 05:42:02.780652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:47:11.384 [2024-12-09 05:42:02.780663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:47:11.384 [2024-12-09 05:42:02.780676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:47:11.384 [2024-12-09 05:42:02.780688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:47:11.384 [2024-12-09 05:42:02.780724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:47:11.384 [2024-12-09 05:42:02.780735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:47:11.384 [2024-12-09 05:42:02.780748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:47:11.384 [2024-12-09 05:42:02.780758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:47:11.384 [2024-12-09 05:42:02.780769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:47:11.384 [2024-12-09 05:42:02.780800] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:47:11.384 [2024-12-09 05:42:02.780819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.780831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:47:11.384 [2024-12-09 05:42:02.780844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:47:11.384 [2024-12-09 05:42:02.780855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:47:11.384 [2024-12-09 05:42:02.780868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:47:11.384 [2024-12-09 05:42:02.780878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:47:11.384 [2024-12-09 05:42:02.780890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:47:11.384 [2024-12-09 05:42:02.780901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:47:11.384 [2024-12-09 05:42:02.780914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:47:11.384 [2024-12-09 05:42:02.780924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:47:11.384 [2024-12-09 05:42:02.780939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.780949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.780962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.780973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.780987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:47:11.384 [2024-12-09 05:42:02.780998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:47:11.384 [2024-12-09 05:42:02.781012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.781024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:47:11.384 [2024-12-09 05:42:02.781037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:47:11.384 [2024-12-09 05:42:02.781047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:47:11.384 [2024-12-09 05:42:02.781060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:47:11.384 [2024-12-09 05:42:02.781103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:11.384 [2024-12-09 05:42:02.781116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:47:11.384 [2024-12-09 05:42:02.781129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 01:47:11.384 [2024-12-09 05:42:02.781142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:11.384 [2024-12-09 05:42:02.781218] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 01:47:11.384 [2024-12-09 05:42:02.781240] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 01:47:13.916 [2024-12-09 05:42:05.480100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:13.916 [2024-12-09 05:42:05.480186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 01:47:13.916 [2024-12-09 05:42:05.480209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2698.898 ms 01:47:13.916 [2024-12-09 05:42:05.480224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:13.916 [2024-12-09 05:42:05.518921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:13.916 [2024-12-09 05:42:05.518994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:47:13.916 [2024-12-09 05:42:05.519044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.449 ms 01:47:13.916 [2024-12-09 05:42:05.519058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:13.916 [2024-12-09 05:42:05.519220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:13.916 [2024-12-09 05:42:05.519243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:47:13.916 [2024-12-09 05:42:05.519257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 01:47:13.916 [2024-12-09 05:42:05.519277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.556772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.556839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:47:14.174 [2024-12-09 05:42:05.556857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.447 ms 01:47:14.174 [2024-12-09 05:42:05.556870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.556915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.556932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:47:14.174 [2024-12-09 05:42:05.556944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:47:14.174 [2024-12-09 05:42:05.556968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.557585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.557606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:47:14.174 [2024-12-09 05:42:05.557619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 01:47:14.174 [2024-12-09 05:42:05.557631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.557835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.557860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:47:14.174 [2024-12-09 05:42:05.557873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 01:47:14.174 [2024-12-09 05:42:05.557889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.575992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.576052] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:47:14.174 [2024-12-09 05:42:05.576070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.077 ms 01:47:14.174 [2024-12-09 05:42:05.576097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.598641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:47:14.174 [2024-12-09 05:42:05.602692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.602729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:47:14.174 [2024-12-09 05:42:05.602780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.505 ms 01:47:14.174 [2024-12-09 05:42:05.602792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.674582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.674970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 01:47:14.174 [2024-12-09 05:42:05.675008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.747 ms 01:47:14.174 [2024-12-09 05:42:05.675023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.675271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.675290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:47:14.174 [2024-12-09 05:42:05.675310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 01:47:14.174 [2024-12-09 05:42:05.675321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.700209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.700249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 01:47:14.174 [2024-12-09 05:42:05.700270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.820 ms 01:47:14.174 [2024-12-09 05:42:05.700281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.724586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.724625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 01:47:14.174 [2024-12-09 05:42:05.724646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.252 ms 01:47:14.174 [2024-12-09 05:42:05.724657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.174 [2024-12-09 05:42:05.725598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.174 [2024-12-09 05:42:05.725644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:47:14.174 [2024-12-09 05:42:05.725675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 01:47:14.174 [2024-12-09 05:42:05.725688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.798203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.432 [2024-12-09 05:42:05.798249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 01:47:14.432 [2024-12-09 05:42:05.798274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.468 ms 01:47:14.432 [2024-12-09 05:42:05.798286] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.824612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.432 [2024-12-09 05:42:05.824652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 01:47:14.432 [2024-12-09 05:42:05.824705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.235 ms 01:47:14.432 [2024-12-09 05:42:05.824719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.849456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.432 [2024-12-09 05:42:05.849494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 01:47:14.432 [2024-12-09 05:42:05.849514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.688 ms 01:47:14.432 [2024-12-09 05:42:05.849524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.874356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.432 [2024-12-09 05:42:05.874395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:47:14.432 [2024-12-09 05:42:05.874438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.785 ms 01:47:14.432 [2024-12-09 05:42:05.874451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.874504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.432 [2024-12-09 05:42:05.874522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:47:14.432 [2024-12-09 05:42:05.874539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:47:14.432 [2024-12-09 05:42:05.874550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.874651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:47:14.432 [2024-12-09 05:42:05.874724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:47:14.432 [2024-12-09 05:42:05.874756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 01:47:14.432 [2024-12-09 05:42:05.874767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:47:14.432 [2024-12-09 05:42:05.876315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3114.435 ms, result 0 01:47:14.432 { 01:47:14.432 "name": "ftl0", 01:47:14.432 "uuid": "f978506d-1366-4345-9c7a-0084bf34ece6" 01:47:14.432 } 01:47:14.432 05:42:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 01:47:14.432 05:42:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 01:47:14.689 05:42:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 01:47:14.689 05:42:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 01:47:14.689 05:42:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 01:47:14.946 /dev/nbd0 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 01:47:15.204 1+0 records in 01:47:15.204 1+0 records out 01:47:15.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249682 s, 16.4 MB/s 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 01:47:15.204 05:42:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 01:47:15.204 [2024-12-09 05:42:06.691888] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:47:15.204 [2024-12-09 05:42:06.692094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81560 ] 01:47:15.463 [2024-12-09 05:42:06.882223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:47:15.463 [2024-12-09 05:42:07.026877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:47:16.839  [2024-12-09T05:42:09.393Z] Copying: 210/1024 [MB] (210 MBps) [2024-12-09T05:42:10.328Z] Copying: 415/1024 [MB] (205 MBps) [2024-12-09T05:42:11.706Z] Copying: 621/1024 [MB] (205 MBps) [2024-12-09T05:42:12.642Z] Copying: 822/1024 [MB] (200 MBps) [2024-12-09T05:42:12.642Z] Copying: 1011/1024 [MB] (189 MBps) [2024-12-09T05:42:13.579Z] Copying: 1024/1024 [MB] (average 202 MBps) 01:47:21.962 01:47:21.962 05:42:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:47:23.859 05:42:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 01:47:23.859 [2024-12-09 05:42:15.221212] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
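[Editor's note] Before the write-back to /dev/nbd0 below, the bash trace so far reduces to the following FTL bring-up sequence. This is a minimal sketch condensed from the commands actually logged above (same bdev names, PCIe address, UUID, and sizes); $RPC is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and file paths are shortened to their basenames:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the NVMe controller used for the write-buffer cache (-> nvc0n1).
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # Carve a 5171 MiB cache slice off it (-> nvc0n1p0); 5171 comes from
    # ftl/common.sh@48's cache_size computation in the trace above.
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # get_bdev_size on the base lvol: 4096 B/block * 26476544 blocks = 103424 MiB.
    # Create the FTL bdev over that thin lvol, with the L2P table capped at
    # 10 MiB of DRAM and the NV cache on the split bdev.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d a9414063-a665-4794-9ff9-6319d1826313 --l2p_dram_limit 10 -c nvc0n1p0
    # Persist the bdev subsystem config (wrapped into ftl.json by dirty_shutdown.sh@64-66).
    $RPC save_subsystem_config -n bdev
    # Expose ftl0 as a kernel block device and fill a test file with 1 GiB of
    # random data (262144 * 4096 B = 1024 MiB, matching 'Copying: 1024/1024 [MB]').
    modprobe nbd
    $RPC nbd_start_disk ftl0 /dev/nbd0
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile

Note the sizing behind the --l2p_dram_limit: the layout dump above reports 20971520 L2P entries at 4 bytes each (80 MiB, matching the 80.00 MiB l2p region), so with a 10 MiB cap only a fraction of the map is DRAM-resident — hence the 'l2p maximum resident size is: 9 (of 10) MiB' notice. The write-back through /dev/nbd0 that follows proceeds at roughly 14 MBps, versus ~200 MBps for the plain file fill, consistent with L2P paging plus NBD overhead.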
01:47:23.859 [2024-12-09 05:42:15.221360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81648 ] 01:47:23.859 [2024-12-09 05:42:15.393011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:47:24.116 [2024-12-09 05:42:15.530985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:47:25.511  [2024-12-09T05:42:18.062Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-09T05:42:18.997Z] Copying: 24/1024 [MB] (12 MBps) [2024-12-09T05:42:19.934Z] Copying: 36/1024 [MB] (12 MBps) [2024-12-09T05:42:20.870Z] Copying: 49/1024 [MB] (12 MBps) [2024-12-09T05:42:22.246Z] Copying: 62/1024 [MB] (12 MBps) [2024-12-09T05:42:22.813Z] Copying: 75/1024 [MB] (13 MBps) [2024-12-09T05:42:24.202Z] Copying: 90/1024 [MB] (14 MBps) [2024-12-09T05:42:25.152Z] Copying: 105/1024 [MB] (14 MBps) [2024-12-09T05:42:26.085Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-09T05:42:27.020Z] Copying: 134/1024 [MB] (14 MBps) [2024-12-09T05:42:27.968Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-09T05:42:28.903Z] Copying: 164/1024 [MB] (14 MBps) [2024-12-09T05:42:29.845Z] Copying: 180/1024 [MB] (15 MBps) [2024-12-09T05:42:31.221Z] Copying: 195/1024 [MB] (15 MBps) [2024-12-09T05:42:32.158Z] Copying: 210/1024 [MB] (15 MBps) [2024-12-09T05:42:33.095Z] Copying: 225/1024 [MB] (14 MBps) [2024-12-09T05:42:34.032Z] Copying: 240/1024 [MB] (14 MBps) [2024-12-09T05:42:34.969Z] Copying: 255/1024 [MB] (15 MBps) [2024-12-09T05:42:35.904Z] Copying: 270/1024 [MB] (14 MBps) [2024-12-09T05:42:36.840Z] Copying: 285/1024 [MB] (14 MBps) [2024-12-09T05:42:38.239Z] Copying: 300/1024 [MB] (14 MBps) [2024-12-09T05:42:39.173Z] Copying: 315/1024 [MB] (15 MBps) [2024-12-09T05:42:40.106Z] Copying: 329/1024 [MB] (14 MBps) [2024-12-09T05:42:41.042Z] Copying: 344/1024 [MB] (14 MBps) [2024-12-09T05:42:41.977Z] Copying: 359/1024 [MB] (14 MBps) [2024-12-09T05:42:42.914Z] Copying: 374/1024 [MB] (14 MBps) [2024-12-09T05:42:43.850Z] Copying: 388/1024 [MB] (14 MBps) [2024-12-09T05:42:45.226Z] Copying: 403/1024 [MB] (14 MBps) [2024-12-09T05:42:46.160Z] Copying: 418/1024 [MB] (14 MBps) [2024-12-09T05:42:47.095Z] Copying: 433/1024 [MB] (15 MBps) [2024-12-09T05:42:48.032Z] Copying: 447/1024 [MB] (14 MBps) [2024-12-09T05:42:48.970Z] Copying: 462/1024 [MB] (14 MBps) [2024-12-09T05:42:49.906Z] Copying: 477/1024 [MB] (14 MBps) [2024-12-09T05:42:50.864Z] Copying: 492/1024 [MB] (14 MBps) [2024-12-09T05:42:52.240Z] Copying: 507/1024 [MB] (14 MBps) [2024-12-09T05:42:53.174Z] Copying: 521/1024 [MB] (14 MBps) [2024-12-09T05:42:54.111Z] Copying: 536/1024 [MB] (14 MBps) [2024-12-09T05:42:55.046Z] Copying: 550/1024 [MB] (14 MBps) [2024-12-09T05:42:55.981Z] Copying: 565/1024 [MB] (14 MBps) [2024-12-09T05:42:56.916Z] Copying: 580/1024 [MB] (14 MBps) [2024-12-09T05:42:57.866Z] Copying: 594/1024 [MB] (13 MBps) [2024-12-09T05:42:59.243Z] Copying: 609/1024 [MB] (14 MBps) [2024-12-09T05:42:59.830Z] Copying: 624/1024 [MB] (15 MBps) [2024-12-09T05:43:01.206Z] Copying: 639/1024 [MB] (14 MBps) [2024-12-09T05:43:02.140Z] Copying: 654/1024 [MB] (14 MBps) [2024-12-09T05:43:03.074Z] Copying: 669/1024 [MB] (14 MBps) [2024-12-09T05:43:04.077Z] Copying: 684/1024 [MB] (14 MBps) [2024-12-09T05:43:05.013Z] Copying: 698/1024 [MB] (14 MBps) [2024-12-09T05:43:05.957Z] Copying: 713/1024 [MB] (14 MBps) [2024-12-09T05:43:06.892Z] Copying: 728/1024 [MB] (14 MBps) [2024-12-09T05:43:07.831Z] 
Copying: 743/1024 [MB] (14 MBps) [2024-12-09T05:43:09.209Z] Copying: 758/1024 [MB] (14 MBps) [2024-12-09T05:43:10.144Z] Copying: 773/1024 [MB] (14 MBps) [2024-12-09T05:43:11.078Z] Copying: 788/1024 [MB] (14 MBps) [2024-12-09T05:43:12.011Z] Copying: 802/1024 [MB] (14 MBps) [2024-12-09T05:43:12.944Z] Copying: 817/1024 [MB] (15 MBps) [2024-12-09T05:43:13.877Z] Copying: 832/1024 [MB] (14 MBps) [2024-12-09T05:43:14.811Z] Copying: 847/1024 [MB] (14 MBps) [2024-12-09T05:43:16.187Z] Copying: 862/1024 [MB] (14 MBps) [2024-12-09T05:43:17.152Z] Copying: 877/1024 [MB] (15 MBps) [2024-12-09T05:43:18.102Z] Copying: 891/1024 [MB] (14 MBps) [2024-12-09T05:43:19.038Z] Copying: 906/1024 [MB] (14 MBps) [2024-12-09T05:43:19.974Z] Copying: 921/1024 [MB] (15 MBps) [2024-12-09T05:43:20.911Z] Copying: 936/1024 [MB] (15 MBps) [2024-12-09T05:43:21.845Z] Copying: 950/1024 [MB] (14 MBps) [2024-12-09T05:43:23.216Z] Copying: 965/1024 [MB] (14 MBps) [2024-12-09T05:43:24.161Z] Copying: 980/1024 [MB] (14 MBps) [2024-12-09T05:43:25.095Z] Copying: 995/1024 [MB] (14 MBps) [2024-12-09T05:43:26.028Z] Copying: 1009/1024 [MB] (14 MBps) [2024-12-09T05:43:26.963Z] Copying: 1024/1024 [MB] (average 14 MBps) 01:48:35.346 01:48:35.346 05:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 01:48:35.346 05:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 01:48:35.605 05:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 01:48:35.864 [2024-12-09 05:43:27.231185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.231261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:48:35.864 [2024-12-09 05:43:27.231281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:48:35.864 [2024-12-09 05:43:27.231298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 05:43:27.231332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:48:35.864 [2024-12-09 05:43:27.234453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.234500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:48:35.864 [2024-12-09 05:43:27.234518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 01:48:35.864 [2024-12-09 05:43:27.234530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 05:43:27.237137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.237176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:48:35.864 [2024-12-09 05:43:27.237210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 01:48:35.864 [2024-12-09 05:43:27.237222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 05:43:27.253198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.253257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:48:35.864 [2024-12-09 05:43:27.253293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.945 ms 01:48:35.864 [2024-12-09 05:43:27.253305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 
05:43:27.258568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.258614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:48:35.864 [2024-12-09 05:43:27.258646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 01:48:35.864 [2024-12-09 05:43:27.258658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 05:43:27.284130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.284169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:48:35.864 [2024-12-09 05:43:27.284203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.330 ms 01:48:35.864 [2024-12-09 05:43:27.284215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 05:43:27.299906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.864 [2024-12-09 05:43:27.299963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:48:35.864 [2024-12-09 05:43:27.300002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.641 ms 01:48:35.864 [2024-12-09 05:43:27.300014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.864 [2024-12-09 05:43:27.300243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.865 [2024-12-09 05:43:27.300279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:48:35.865 [2024-12-09 05:43:27.300296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 01:48:35.865 [2024-12-09 05:43:27.300308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.865 [2024-12-09 05:43:27.325141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.865 [2024-12-09 05:43:27.325181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:48:35.865 [2024-12-09 05:43:27.325216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.806 ms 01:48:35.865 [2024-12-09 05:43:27.325227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.865 [2024-12-09 05:43:27.349753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.865 [2024-12-09 05:43:27.349793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:48:35.865 [2024-12-09 05:43:27.349827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.477 ms 01:48:35.865 [2024-12-09 05:43:27.349839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.865 [2024-12-09 05:43:27.373601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.865 [2024-12-09 05:43:27.373640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:48:35.865 [2024-12-09 05:43:27.373674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.713 ms 01:48:35.865 [2024-12-09 05:43:27.373695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.865 [2024-12-09 05:43:27.397496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.865 [2024-12-09 05:43:27.397537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:48:35.865 [2024-12-09 05:43:27.397571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.708 ms 01:48:35.865 [2024-12-09 05:43:27.397581] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.865 [2024-12-09 05:43:27.397628] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:48:35.865 [2024-12-09 05:43:27.397651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.397997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:48:35.865 [2024-12-09 05:43:27.398644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398682] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.398990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 
05:43:27.399002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.399017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:48:35.866 [2024-12-09 05:43:27.399037] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:48:35.866 [2024-12-09 05:43:27.399051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f978506d-1366-4345-9c7a-0084bf34ece6 01:48:35.866 [2024-12-09 05:43:27.399063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 01:48:35.866 [2024-12-09 05:43:27.399087] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 01:48:35.866 [2024-12-09 05:43:27.399101] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 01:48:35.866 [2024-12-09 05:43:27.399114] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 01:48:35.866 [2024-12-09 05:43:27.399125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:48:35.866 [2024-12-09 05:43:27.399138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:48:35.866 [2024-12-09 05:43:27.399149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:48:35.866 [2024-12-09 05:43:27.399161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:48:35.866 [2024-12-09 05:43:27.399171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:48:35.866 [2024-12-09 05:43:27.399184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.866 [2024-12-09 05:43:27.399195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:48:35.866 [2024-12-09 05:43:27.399210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 01:48:35.866 [2024-12-09 05:43:27.399220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.866 [2024-12-09 05:43:27.413667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.866 [2024-12-09 05:43:27.413761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:48:35.866 [2024-12-09 05:43:27.413798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.401 ms 01:48:35.866 [2024-12-09 05:43:27.413820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.866 [2024-12-09 05:43:27.414330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:35.866 [2024-12-09 05:43:27.414357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:48:35.866 [2024-12-09 05:43:27.414374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 01:48:35.866 [2024-12-09 05:43:27.414386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.866 [2024-12-09 05:43:27.464949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:35.866 [2024-12-09 05:43:27.464994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:48:35.866 [2024-12-09 05:43:27.465029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:35.866 [2024-12-09 05:43:27.465041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.866 [2024-12-09 05:43:27.465106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:35.866 [2024-12-09 05:43:27.465120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 01:48:35.866 [2024-12-09 05:43:27.465134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:35.866 [2024-12-09 05:43:27.465144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.866 [2024-12-09 05:43:27.465245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:35.866 [2024-12-09 05:43:27.465297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:48:35.866 [2024-12-09 05:43:27.465311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:35.866 [2024-12-09 05:43:27.465323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:35.866 [2024-12-09 05:43:27.465355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:35.866 [2024-12-09 05:43:27.465369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:48:35.866 [2024-12-09 05:43:27.465382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:35.866 [2024-12-09 05:43:27.465393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.550484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.550543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:48:36.125 [2024-12-09 05:43:27.550579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.550592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.618821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.618871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:48:36.125 [2024-12-09 05:43:27.618907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.618920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.619059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:48:36.125 [2024-12-09 05:43:27.619077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.619088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.619207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:48:36.125 [2024-12-09 05:43:27.619222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.619232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.619377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:48:36.125 [2024-12-09 05:43:27.619392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.619410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.619482] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:48:36.125 [2024-12-09 05:43:27.619497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.619508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.619573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:48:36.125 [2024-12-09 05:43:27.619587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.619601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:48:36.125 [2024-12-09 05:43:27.619676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:48:36.125 [2024-12-09 05:43:27.619690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:48:36.125 [2024-12-09 05:43:27.619701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:36.125 [2024-12-09 05:43:27.619889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.669 ms, result 0 01:48:36.125 true 01:48:36.125 05:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81411 01:48:36.125 05:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81411 01:48:36.125 05:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 01:48:36.382 [2024-12-09 05:43:27.757346] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:48:36.382 [2024-12-09 05:43:27.757520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82353 ] 01:48:36.382 [2024-12-09 05:43:27.937045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:48:36.640 [2024-12-09 05:43:28.040718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:48:38.099  [2024-12-09T05:43:30.317Z] Copying: 211/1024 [MB] (211 MBps) [2024-12-09T05:43:31.693Z] Copying: 421/1024 [MB] (209 MBps) [2024-12-09T05:43:32.628Z] Copying: 631/1024 [MB] (209 MBps) [2024-12-09T05:43:33.561Z] Copying: 837/1024 [MB] (205 MBps) [2024-12-09T05:43:34.495Z] Copying: 1024/1024 [MB] (average 207 MBps) 01:48:42.878 01:48:42.878 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81411 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 01:48:42.878 05:43:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:48:42.878 [2024-12-09 05:43:34.272202] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
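The commands traced above are the heart of the dirty-shutdown scenario: dirty_shutdown.sh@83 SIGKILLs the spdk_tgt process (pid 81411) so FTL never gets a clean shutdown, @87 fills a 1 GiB reference file with random data, and @88 launches a second, standalone spdk_dd application (pid 82422) that rebuilds the bdev stack from ftl.json and writes the file into the ftl0 bdev. A minimal sketch of those steps, reconstructed from the traced commands with paths shortened relative to the spdk repo root; this is not the script verbatim:

kill -9 81411                               # @83: target dies with FTL left dirty
rm -f /dev/shm/spdk_tgt_trace.pid81411      # @84: drop the dead target's trace file
# @87: 262144 blocks x 4 KiB = 1 GiB of random reference data
build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
# @88: replay the file through the FTL bdev; --ob targets a bdev rather than a
# file, and --json points this standalone spdk_dd at the saved bdev configuration
build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
    --json=test/ftl/config/ftl.json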
01:48:42.878 [2024-12-09 05:43:34.272391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82422 ] 01:48:42.878 [2024-12-09 05:43:34.449255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:48:43.136 [2024-12-09 05:43:34.548875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:48:43.394 [2024-12-09 05:43:34.862381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:48:43.394 [2024-12-09 05:43:34.862515] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:48:43.395 [2024-12-09 05:43:34.928231] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 01:48:43.395 [2024-12-09 05:43:34.928691] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:48:43.395 [2024-12-09 05:43:34.929019] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:48:43.665 [2024-12-09 05:43:35.203946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.203988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:48:43.665 [2024-12-09 05:43:35.204005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:48:43.665 [2024-12-09 05:43:35.204021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.204078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.204095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:48:43.665 [2024-12-09 05:43:35.204106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 01:48:43.665 [2024-12-09 05:43:35.204115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.204142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:48:43.665 [2024-12-09 05:43:35.204914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:48:43.665 [2024-12-09 05:43:35.204939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.204952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:48:43.665 [2024-12-09 05:43:35.204964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 01:48:43.665 [2024-12-09 05:43:35.204974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.206966] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:48:43.665 [2024-12-09 05:43:35.220633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.220693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:48:43.665 [2024-12-09 05:43:35.220710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.669 ms 01:48:43.665 [2024-12-09 05:43:35.220729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.220791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.220810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 01:48:43.665 [2024-12-09 05:43:35.220823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 01:48:43.665 [2024-12-09 05:43:35.220833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.229390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.229425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:48:43.665 [2024-12-09 05:43:35.229440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.479 ms 01:48:43.665 [2024-12-09 05:43:35.229450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.229536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.229553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:48:43.665 [2024-12-09 05:43:35.229564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 01:48:43.665 [2024-12-09 05:43:35.229574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.229645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.229696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:48:43.665 [2024-12-09 05:43:35.229728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:48:43.665 [2024-12-09 05:43:35.229739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.229787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:48:43.665 [2024-12-09 05:43:35.234159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.234188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:48:43.665 [2024-12-09 05:43:35.234202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 01:48:43.665 [2024-12-09 05:43:35.234212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.234258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.665 [2024-12-09 05:43:35.234273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:48:43.665 [2024-12-09 05:43:35.234284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:48:43.665 [2024-12-09 05:43:35.234294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.665 [2024-12-09 05:43:35.234342] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:48:43.665 [2024-12-09 05:43:35.234371] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:48:43.665 [2024-12-09 05:43:35.234444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:48:43.665 [2024-12-09 05:43:35.234465] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:48:43.666 [2024-12-09 05:43:35.234558] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:48:43.666 [2024-12-09 05:43:35.234573] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:48:43.666 
[2024-12-09 05:43:35.234587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:48:43.666 [2024-12-09 05:43:35.234605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:48:43.666 [2024-12-09 05:43:35.234618] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:48:43.666 [2024-12-09 05:43:35.234629] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:48:43.666 [2024-12-09 05:43:35.234639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:48:43.666 [2024-12-09 05:43:35.234650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:48:43.666 [2024-12-09 05:43:35.234660] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:48:43.666 [2024-12-09 05:43:35.234672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.666 [2024-12-09 05:43:35.234712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:48:43.666 [2024-12-09 05:43:35.234741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 01:48:43.666 [2024-12-09 05:43:35.234751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.666 [2024-12-09 05:43:35.234830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.666 [2024-12-09 05:43:35.234849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:48:43.666 [2024-12-09 05:43:35.234861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 01:48:43.666 [2024-12-09 05:43:35.234877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.666 [2024-12-09 05:43:35.234980] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:48:43.666 [2024-12-09 05:43:35.235004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:48:43.666 [2024-12-09 05:43:35.235017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:48:43.666 [2024-12-09 05:43:35.235047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:48:43.666 [2024-12-09 05:43:35.235094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:48:43.666 [2024-12-09 05:43:35.235126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:48:43.666 [2024-12-09 05:43:35.235136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:48:43.666 [2024-12-09 05:43:35.235145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:48:43.666 [2024-12-09 05:43:35.235156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:48:43.666 [2024-12-09 05:43:35.235166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:48:43.666 [2024-12-09 05:43:35.235176] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:48:43.666 [2024-12-09 05:43:35.235195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:48:43.666 [2024-12-09 05:43:35.235222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:48:43.666 [2024-12-09 05:43:35.235251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:48:43.666 [2024-12-09 05:43:35.235279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:48:43.666 [2024-12-09 05:43:35.235307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:48:43.666 [2024-12-09 05:43:35.235348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:48:43.666 [2024-12-09 05:43:35.235366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:48:43.666 [2024-12-09 05:43:35.235376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:48:43.666 [2024-12-09 05:43:35.235385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:48:43.666 [2024-12-09 05:43:35.235395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:48:43.666 [2024-12-09 05:43:35.235404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:48:43.666 [2024-12-09 05:43:35.235416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:48:43.666 [2024-12-09 05:43:35.235434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:48:43.666 [2024-12-09 05:43:35.235446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:48:43.666 [2024-12-09 05:43:35.235455] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:48:43.666 [2024-12-09 05:43:35.235466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:48:43.666 [2024-12-09 05:43:35.235493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:48:43.666 [2024-12-09 
05:43:35.235514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:48:43.666 [2024-12-09 05:43:35.235524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:48:43.666 [2024-12-09 05:43:35.235533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:48:43.666 [2024-12-09 05:43:35.235543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:48:43.666 [2024-12-09 05:43:35.235552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:48:43.666 [2024-12-09 05:43:35.235562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:48:43.666 [2024-12-09 05:43:35.235573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:48:43.666 [2024-12-09 05:43:35.235586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:48:43.666 [2024-12-09 05:43:35.235607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:48:43.666 [2024-12-09 05:43:35.235617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:48:43.666 [2024-12-09 05:43:35.235628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:48:43.666 [2024-12-09 05:43:35.235638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:48:43.666 [2024-12-09 05:43:35.235647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:48:43.666 [2024-12-09 05:43:35.235658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:48:43.666 [2024-12-09 05:43:35.235681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:48:43.666 [2024-12-09 05:43:35.235693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:48:43.666 [2024-12-09 05:43:35.235703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:48:43.666 [2024-12-09 05:43:35.235756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 01:48:43.666 [2024-12-09 05:43:35.235768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:48:43.666 [2024-12-09 05:43:35.235808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:48:43.666 [2024-12-09 05:43:35.235819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:48:43.666 [2024-12-09 05:43:35.235830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:48:43.666 [2024-12-09 05:43:35.235841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.666 [2024-12-09 05:43:35.235853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:48:43.666 [2024-12-09 05:43:35.235864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 01:48:43.666 [2024-12-09 05:43:35.235875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.666 [2024-12-09 05:43:35.271266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.666 [2024-12-09 05:43:35.271343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:48:43.666 [2024-12-09 05:43:35.271363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.326 ms 01:48:43.666 [2024-12-09 05:43:35.271376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.666 [2024-12-09 05:43:35.271492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.667 [2024-12-09 05:43:35.271508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:48:43.667 [2024-12-09 05:43:35.271520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 01:48:43.667 [2024-12-09 05:43:35.271531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.927 [2024-12-09 05:43:35.319767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.927 [2024-12-09 05:43:35.319845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:48:43.927 [2024-12-09 05:43:35.319871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.112 ms 01:48:43.927 [2024-12-09 05:43:35.319882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.927 [2024-12-09 05:43:35.319961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.927 [2024-12-09 05:43:35.319977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:48:43.928 [2024-12-09 05:43:35.319990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:48:43.928 [2024-12-09 05:43:35.320001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.320621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.320640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:48:43.928 [2024-12-09 05:43:35.320653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 01:48:43.928 [2024-12-09 05:43:35.320671] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.320859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.320880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:48:43.928 [2024-12-09 05:43:35.320892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 01:48:43.928 [2024-12-09 05:43:35.320903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.338587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.338630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:48:43.928 [2024-12-09 05:43:35.338647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.655 ms 01:48:43.928 [2024-12-09 05:43:35.338659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.352968] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:48:43.928 [2024-12-09 05:43:35.353001] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:48:43.928 [2024-12-09 05:43:35.353018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.353028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:48:43.928 [2024-12-09 05:43:35.353040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.152 ms 01:48:43.928 [2024-12-09 05:43:35.353050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.377217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.377278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:48:43.928 [2024-12-09 05:43:35.377296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.124 ms 01:48:43.928 [2024-12-09 05:43:35.377307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.390634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.390677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:48:43.928 [2024-12-09 05:43:35.390710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.263 ms 01:48:43.928 [2024-12-09 05:43:35.390735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.403403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.403429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:48:43.928 [2024-12-09 05:43:35.403443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.594 ms 01:48:43.928 [2024-12-09 05:43:35.403453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.404352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.404378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:48:43.928 [2024-12-09 05:43:35.404392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 01:48:43.928 [2024-12-09 05:43:35.404418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
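Because the target was killed rather than shut down, this spdk_dd instance brings FTL up through its recovery path: the superblock load reports "SHM: clean 0, shm_clean 0", the blobstore logs "Performing recovery", and the management sequence walks Restore NV cache metadata, Restore valid map metadata, Restore band info metadata and Restore trim metadata before the P2L checkpoints and L2P are restored below. The name/duration pairs in these trace_step entries fold into a readable two-column timing summary; a small sketch, assuming each entry sits on its own console line as in the live log, with console.log standing in for a saved copy of this output:

# pair each management step with its duration, e.g. "Load super block<TAB>13.669 ms"
grep -E 'trace_step.*(name|duration):' console.log \
    | sed -E 's/.*(name|duration): //' \
    | paste - -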
01:48:43.928 [2024-12-09 05:43:35.476747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.476826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:48:43.928 [2024-12-09 05:43:35.476847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.281 ms 01:48:43.928 [2024-12-09 05:43:35.476859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.488323] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:48:43.928 [2024-12-09 05:43:35.492534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.492582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:48:43.928 [2024-12-09 05:43:35.492606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.603 ms 01:48:43.928 [2024-12-09 05:43:35.492619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.492778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.492803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:48:43.928 [2024-12-09 05:43:35.492817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 01:48:43.928 [2024-12-09 05:43:35.492829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.492932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.492952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:48:43.928 [2024-12-09 05:43:35.492965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 01:48:43.928 [2024-12-09 05:43:35.492983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.493034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.493050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:48:43.928 [2024-12-09 05:43:35.493062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 01:48:43.928 [2024-12-09 05:43:35.493088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.493150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:48:43.928 [2024-12-09 05:43:35.493184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.493197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:48:43.928 [2024-12-09 05:43:35.493215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 01:48:43.928 [2024-12-09 05:43:35.493226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.521273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 05:43:35.521323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:48:43.928 [2024-12-09 05:43:35.521340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.011 ms 01:48:43.928 [2024-12-09 05:43:35.521352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.521446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:48:43.928 [2024-12-09 
05:43:35.521465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:48:43.928 [2024-12-09 05:43:35.521478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 01:48:43.928 [2024-12-09 05:43:35.521494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:48:43.928 [2024-12-09 05:43:35.523225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.608 ms, result 0 01:48:45.303  [2024-12-09T05:43:37.857Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-09T05:43:38.812Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-09T05:43:39.748Z] Copying: 69/1024 [MB] (23 MBps) [2024-12-09T05:43:40.685Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-09T05:43:41.621Z] Copying: 116/1024 [MB] (23 MBps) [2024-12-09T05:43:42.561Z] Copying: 140/1024 [MB] (23 MBps) [2024-12-09T05:43:43.940Z] Copying: 164/1024 [MB] (24 MBps) [2024-12-09T05:43:44.879Z] Copying: 188/1024 [MB] (24 MBps) [2024-12-09T05:43:45.815Z] Copying: 212/1024 [MB] (24 MBps) [2024-12-09T05:43:46.753Z] Copying: 236/1024 [MB] (24 MBps) [2024-12-09T05:43:47.728Z] Copying: 261/1024 [MB] (24 MBps) [2024-12-09T05:43:48.663Z] Copying: 285/1024 [MB] (24 MBps) [2024-12-09T05:43:49.598Z] Copying: 310/1024 [MB] (24 MBps) [2024-12-09T05:43:50.974Z] Copying: 334/1024 [MB] (23 MBps) [2024-12-09T05:43:51.541Z] Copying: 358/1024 [MB] (23 MBps) [2024-12-09T05:43:52.919Z] Copying: 381/1024 [MB] (23 MBps) [2024-12-09T05:43:53.853Z] Copying: 405/1024 [MB] (23 MBps) [2024-12-09T05:43:54.789Z] Copying: 429/1024 [MB] (23 MBps) [2024-12-09T05:43:55.724Z] Copying: 452/1024 [MB] (23 MBps) [2024-12-09T05:43:56.660Z] Copying: 476/1024 [MB] (23 MBps) [2024-12-09T05:43:57.594Z] Copying: 499/1024 [MB] (23 MBps) [2024-12-09T05:43:58.967Z] Copying: 523/1024 [MB] (23 MBps) [2024-12-09T05:43:59.901Z] Copying: 546/1024 [MB] (23 MBps) [2024-12-09T05:44:00.867Z] Copying: 571/1024 [MB] (24 MBps) [2024-12-09T05:44:01.801Z] Copying: 594/1024 [MB] (23 MBps) [2024-12-09T05:44:02.737Z] Copying: 618/1024 [MB] (23 MBps) [2024-12-09T05:44:03.674Z] Copying: 642/1024 [MB] (23 MBps) [2024-12-09T05:44:04.610Z] Copying: 665/1024 [MB] (23 MBps) [2024-12-09T05:44:05.547Z] Copying: 689/1024 [MB] (23 MBps) [2024-12-09T05:44:06.922Z] Copying: 714/1024 [MB] (24 MBps) [2024-12-09T05:44:07.858Z] Copying: 738/1024 [MB] (24 MBps) [2024-12-09T05:44:08.795Z] Copying: 763/1024 [MB] (24 MBps) [2024-12-09T05:44:09.738Z] Copying: 787/1024 [MB] (24 MBps) [2024-12-09T05:44:10.670Z] Copying: 811/1024 [MB] (23 MBps) [2024-12-09T05:44:11.604Z] Copying: 835/1024 [MB] (24 MBps) [2024-12-09T05:44:12.537Z] Copying: 858/1024 [MB] (23 MBps) [2024-12-09T05:44:13.918Z] Copying: 882/1024 [MB] (23 MBps) [2024-12-09T05:44:14.855Z] Copying: 906/1024 [MB] (23 MBps) [2024-12-09T05:44:15.850Z] Copying: 929/1024 [MB] (23 MBps) [2024-12-09T05:44:16.783Z] Copying: 953/1024 [MB] (23 MBps) [2024-12-09T05:44:17.721Z] Copying: 976/1024 [MB] (23 MBps) [2024-12-09T05:44:18.657Z] Copying: 1000/1024 [MB] (23 MBps) [2024-12-09T05:44:19.594Z] Copying: 1023/1024 [MB] (22 MBps) [2024-12-09T05:44:19.594Z] Copying: 1048548/1048576 [kB] (880 kBps) [2024-12-09T05:44:19.594Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-09 05:44:19.576249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:27.977 [2024-12-09 05:44:19.576354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:49:27.977 [2024-12-09 05:44:19.576393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 01:49:27.977 [2024-12-09 05:44:19.576422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:27.977 [2024-12-09 05:44:19.578813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:49:27.977 [2024-12-09 05:44:19.585377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:27.977 [2024-12-09 05:44:19.585416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:49:27.977 [2024-12-09 05:44:19.585454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.517 ms 01:49:27.977 [2024-12-09 05:44:19.585466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.596822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 05:44:19.596872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:49:28.236 [2024-12-09 05:44:19.596904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.499 ms 01:49:28.236 [2024-12-09 05:44:19.596916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.618594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 05:44:19.618642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:49:28.236 [2024-12-09 05:44:19.618678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.657 ms 01:49:28.236 [2024-12-09 05:44:19.618737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.624103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 05:44:19.624141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:49:28.236 [2024-12-09 05:44:19.624175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.321 ms 01:49:28.236 [2024-12-09 05:44:19.624185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.649788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 05:44:19.649829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:49:28.236 [2024-12-09 05:44:19.649860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.541 ms 01:49:28.236 [2024-12-09 05:44:19.649871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.664739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 05:44:19.664778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:49:28.236 [2024-12-09 05:44:19.664810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.828 ms 01:49:28.236 [2024-12-09 05:44:19.664821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.785444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 05:44:19.785513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:49:28.236 [2024-12-09 05:44:19.785545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.580 ms 01:49:28.236 [2024-12-09 05:44:19.785557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.236 [2024-12-09 05:44:19.810450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.236 [2024-12-09 
05:44:19.810491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:49:28.237 [2024-12-09 05:44:19.810521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.873 ms 01:49:28.237 [2024-12-09 05:44:19.810546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.237 [2024-12-09 05:44:19.834826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.237 [2024-12-09 05:44:19.834865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:49:28.237 [2024-12-09 05:44:19.834895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.241 ms 01:49:28.237 [2024-12-09 05:44:19.834905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.497 [2024-12-09 05:44:19.858676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.497 [2024-12-09 05:44:19.858715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:49:28.497 [2024-12-09 05:44:19.858746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.732 ms 01:49:28.497 [2024-12-09 05:44:19.858756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.497 [2024-12-09 05:44:19.883345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.497 [2024-12-09 05:44:19.883412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:49:28.497 [2024-12-09 05:44:19.883427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.526 ms 01:49:28.497 [2024-12-09 05:44:19.883438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.497 [2024-12-09 05:44:19.883491] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:49:28.497 [2024-12-09 05:44:19.883518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128256 / 261120 wr_cnt: 1 state: open 01:49:28.497 [2024-12-09 05:44:19.883531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883704] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.883991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 
[2024-12-09 05:44:19.884030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:49:28.497 [2024-12-09 05:44:19.884254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 01:49:28.498 [2024-12-09 05:44:19.884334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:49:28.498 [2024-12-09 05:44:19.884818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:49:28.498 [2024-12-09 05:44:19.884835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f978506d-1366-4345-9c7a-0084bf34ece6 01:49:28.498 [2024-12-09 05:44:19.884858] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128256 01:49:28.498 [2024-12-09 05:44:19.884870] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129216 01:49:28.498 [2024-12-09 05:44:19.884880] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128256 01:49:28.498 [2024-12-09 05:44:19.884893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 01:49:28.498 [2024-12-09 05:44:19.884903] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:49:28.498 [2024-12-09 05:44:19.884914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:49:28.498 [2024-12-09 05:44:19.884925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:49:28.498 [2024-12-09 05:44:19.884935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:49:28.498 [2024-12-09 05:44:19.884945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:49:28.498 [2024-12-09 05:44:19.884957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.498 [2024-12-09 05:44:19.884969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:49:28.498 [2024-12-09 05:44:19.884980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 01:49:28.498 [2024-12-09 05:44:19.884991] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:19.899375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.498 [2024-12-09 05:44:19.899411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:49:28.498 [2024-12-09 05:44:19.899442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.345 ms 01:49:28.498 [2024-12-09 05:44:19.899454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:19.900028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:28.498 [2024-12-09 05:44:19.900088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:49:28.498 [2024-12-09 05:44:19.900140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 01:49:28.498 [2024-12-09 05:44:19.900152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:19.936157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:19.936200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:49:28.498 [2024-12-09 05:44:19.936231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:19.936242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:19.936296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:19.936309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:49:28.498 [2024-12-09 05:44:19.936327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:19.936337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:19.936459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:19.936478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:49:28.498 [2024-12-09 05:44:19.936491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:19.936502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:19.936523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:19.936538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:49:28.498 [2024-12-09 05:44:19.936549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:19.936568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:20.022933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:20.023030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:49:28.498 [2024-12-09 05:44:20.023078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:20.023090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:20.097087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:20.097150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:49:28.498 [2024-12-09 05:44:20.097190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:20.097202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:20.097302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:20.097319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:49:28.498 [2024-12-09 05:44:20.097330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:20.097347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:20.097395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:20.097410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:49:28.498 [2024-12-09 05:44:20.097421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:20.097431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.498 [2024-12-09 05:44:20.097615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.498 [2024-12-09 05:44:20.097635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:49:28.498 [2024-12-09 05:44:20.097648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.498 [2024-12-09 05:44:20.097660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.499 [2024-12-09 05:44:20.097761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.499 [2024-12-09 05:44:20.097781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:49:28.499 [2024-12-09 05:44:20.097795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.499 [2024-12-09 05:44:20.097807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.499 [2024-12-09 05:44:20.097861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.499 [2024-12-09 05:44:20.097878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:49:28.499 [2024-12-09 05:44:20.097891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.499 [2024-12-09 05:44:20.097903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.499 [2024-12-09 05:44:20.097954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:49:28.499 [2024-12-09 05:44:20.097972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:49:28.499 [2024-12-09 05:44:20.097984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:49:28.499 [2024-12-09 05:44:20.097996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:28.499 [2024-12-09 05:44:20.098159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.803 ms, result 0 01:49:30.400 01:49:30.400 01:49:30.400 05:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 01:49:31.774 05:44:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:49:32.031 [2024-12-09 05:44:23.422904] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 
initialization... 01:49:32.031 [2024-12-09 05:44:23.423123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82894 ] 01:49:32.031 [2024-12-09 05:44:23.614582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:49:32.289 [2024-12-09 05:44:23.759756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:49:32.547 [2024-12-09 05:44:24.069984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:49:32.547 [2024-12-09 05:44:24.070084] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:49:32.806 [2024-12-09 05:44:24.228928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.806 [2024-12-09 05:44:24.228992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:49:32.806 [2024-12-09 05:44:24.229028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:49:32.806 [2024-12-09 05:44:24.229039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.806 [2024-12-09 05:44:24.229111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.806 [2024-12-09 05:44:24.229131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:49:32.806 [2024-12-09 05:44:24.229143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 01:49:32.806 [2024-12-09 05:44:24.229153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.806 [2024-12-09 05:44:24.229184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:49:32.806 [2024-12-09 05:44:24.230066] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:49:32.806 [2024-12-09 05:44:24.230135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.806 [2024-12-09 05:44:24.230148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:49:32.806 [2024-12-09 05:44:24.230160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 01:49:32.806 [2024-12-09 05:44:24.230171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.806 [2024-12-09 05:44:24.232156] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:49:32.806 [2024-12-09 05:44:24.246051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.806 [2024-12-09 05:44:24.246110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:49:32.806 [2024-12-09 05:44:24.246142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.896 ms 01:49:32.807 [2024-12-09 05:44:24.246153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.246229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.246249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:49:32.807 [2024-12-09 05:44:24.246261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 01:49:32.807 [2024-12-09 05:44:24.246272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.254872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.254915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:49:32.807 [2024-12-09 05:44:24.254944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.449 ms 01:49:32.807 [2024-12-09 05:44:24.254960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.255051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.255070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:49:32.807 [2024-12-09 05:44:24.255081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:49:32.807 [2024-12-09 05:44:24.255091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.255189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.255207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:49:32.807 [2024-12-09 05:44:24.255220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 01:49:32.807 [2024-12-09 05:44:24.255231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.255268] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:49:32.807 [2024-12-09 05:44:24.259643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.259715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:49:32.807 [2024-12-09 05:44:24.259750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.382 ms 01:49:32.807 [2024-12-09 05:44:24.259762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.259798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.259812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:49:32.807 [2024-12-09 05:44:24.259824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 01:49:32.807 [2024-12-09 05:44:24.259835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.259915] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:49:32.807 [2024-12-09 05:44:24.259947] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:49:32.807 [2024-12-09 05:44:24.259988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:49:32.807 [2024-12-09 05:44:24.260012] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:49:32.807 [2024-12-09 05:44:24.260120] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:49:32.807 [2024-12-09 05:44:24.260149] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:49:32.807 [2024-12-09 05:44:24.260165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:49:32.807 [2024-12-09 05:44:24.260181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260194] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260207] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:49:32.807 [2024-12-09 05:44:24.260218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:49:32.807 [2024-12-09 05:44:24.260235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:49:32.807 [2024-12-09 05:44:24.260245] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:49:32.807 [2024-12-09 05:44:24.260257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.260269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:49:32.807 [2024-12-09 05:44:24.260280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 01:49:32.807 [2024-12-09 05:44:24.260291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.260381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.807 [2024-12-09 05:44:24.260397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:49:32.807 [2024-12-09 05:44:24.260409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 01:49:32.807 [2024-12-09 05:44:24.260420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.807 [2024-12-09 05:44:24.260533] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:49:32.807 [2024-12-09 05:44:24.260553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:49:32.807 [2024-12-09 05:44:24.260565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:49:32.807 [2024-12-09 05:44:24.260598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:49:32.807 [2024-12-09 05:44:24.260628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:49:32.807 [2024-12-09 05:44:24.260648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:49:32.807 [2024-12-09 05:44:24.260658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:49:32.807 [2024-12-09 05:44:24.260683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:49:32.807 [2024-12-09 05:44:24.260707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:49:32.807 [2024-12-09 05:44:24.260719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:49:32.807 [2024-12-09 05:44:24.260729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:49:32.807 [2024-12-09 05:44:24.260751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260761] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:49:32.807 [2024-12-09 05:44:24.260782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:49:32.807 [2024-12-09 05:44:24.260813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:49:32.807 [2024-12-09 05:44:24.260844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:49:32.807 [2024-12-09 05:44:24.260874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:49:32.807 [2024-12-09 05:44:24.260910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:49:32.807 [2024-12-09 05:44:24.260920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:49:32.807 [2024-12-09 05:44:24.260930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:49:32.807 [2024-12-09 05:44:24.260940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:49:32.807 [2024-12-09 05:44:24.260951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:49:32.807 [2024-12-09 05:44:24.260962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:49:32.807 [2024-12-09 05:44:24.260972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:49:32.807 [2024-12-09 05:44:24.260982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:49:32.807 [2024-12-09 05:44:24.260992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.261002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:49:32.807 [2024-12-09 05:44:24.261013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:49:32.807 [2024-12-09 05:44:24.261024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.261034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:49:32.807 [2024-12-09 05:44:24.261045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:49:32.807 [2024-12-09 05:44:24.261056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:49:32.807 [2024-12-09 05:44:24.261067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:49:32.807 [2024-12-09 05:44:24.261079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:49:32.807 [2024-12-09 05:44:24.261090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:49:32.807 [2024-12-09 05:44:24.261101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:49:32.807 
[2024-12-09 05:44:24.261112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:49:32.807 [2024-12-09 05:44:24.261123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:49:32.807 [2024-12-09 05:44:24.261133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:49:32.807 [2024-12-09 05:44:24.261146] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:49:32.807 [2024-12-09 05:44:24.261160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:49:32.808 [2024-12-09 05:44:24.261190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:49:32.808 [2024-12-09 05:44:24.261201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:49:32.808 [2024-12-09 05:44:24.261212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:49:32.808 [2024-12-09 05:44:24.261224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:49:32.808 [2024-12-09 05:44:24.261249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:49:32.808 [2024-12-09 05:44:24.261261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:49:32.808 [2024-12-09 05:44:24.261271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:49:32.808 [2024-12-09 05:44:24.261282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:49:32.808 [2024-12-09 05:44:24.261293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:49:32.808 [2024-12-09 05:44:24.261348] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:49:32.808 [2024-12-09 05:44:24.261359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:49:32.808 [2024-12-09 05:44:24.261382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:49:32.808 [2024-12-09 05:44:24.261393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:49:32.808 [2024-12-09 05:44:24.261404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:49:32.808 [2024-12-09 05:44:24.261415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.261427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:49:32.808 [2024-12-09 05:44:24.261439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 01:49:32.808 [2024-12-09 05:44:24.261450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.295941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.296015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:49:32.808 [2024-12-09 05:44:24.296049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.431 ms 01:49:32.808 [2024-12-09 05:44:24.296082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.296189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.296204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:49:32.808 [2024-12-09 05:44:24.296216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 01:49:32.808 [2024-12-09 05:44:24.296226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.345845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.345893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:49:32.808 [2024-12-09 05:44:24.345925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.488 ms 01:49:32.808 [2024-12-09 05:44:24.345936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.345987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.346004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:49:32.808 [2024-12-09 05:44:24.346022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:49:32.808 [2024-12-09 05:44:24.346033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.346770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.346812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:49:32.808 [2024-12-09 05:44:24.346827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 01:49:32.808 [2024-12-09 05:44:24.346839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.347017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.347066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:49:32.808 [2024-12-09 05:44:24.347101] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 01:49:32.808 [2024-12-09 05:44:24.347112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.363630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.363707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:49:32.808 [2024-12-09 05:44:24.363740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.492 ms 01:49:32.808 [2024-12-09 05:44:24.363751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.377502] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 01:49:32.808 [2024-12-09 05:44:24.377544] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:49:32.808 [2024-12-09 05:44:24.377576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.377587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:49:32.808 [2024-12-09 05:44:24.377602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.709 ms 01:49:32.808 [2024-12-09 05:44:24.377613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.400994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.401036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:49:32.808 [2024-12-09 05:44:24.401067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.339 ms 01:49:32.808 [2024-12-09 05:44:24.401078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:32.808 [2024-12-09 05:44:24.413478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:32.808 [2024-12-09 05:44:24.413519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:49:32.808 [2024-12-09 05:44:24.413548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.366 ms 01:49:32.808 [2024-12-09 05:44:24.413559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.425685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.425724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:49:33.067 [2024-12-09 05:44:24.425753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.086 ms 01:49:33.067 [2024-12-09 05:44:24.425763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.426566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.426614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:49:33.067 [2024-12-09 05:44:24.426632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 01:49:33.067 [2024-12-09 05:44:24.426657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.491161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.491235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:49:33.067 [2024-12-09 05:44:24.491276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.480 ms 01:49:33.067 [2024-12-09 05:44:24.491288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.501164] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:49:33.067 [2024-12-09 05:44:24.503270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.503304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:49:33.067 [2024-12-09 05:44:24.503335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.919 ms 01:49:33.067 [2024-12-09 05:44:24.503350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.503445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.503463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:49:33.067 [2024-12-09 05:44:24.503480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:49:33.067 [2024-12-09 05:44:24.503490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.505405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.505436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:49:33.067 [2024-12-09 05:44:24.505464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 01:49:33.067 [2024-12-09 05:44:24.505474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.505506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.505520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:49:33.067 [2024-12-09 05:44:24.505532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:49:33.067 [2024-12-09 05:44:24.505541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.505587] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:49:33.067 [2024-12-09 05:44:24.505603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.505613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:49:33.067 [2024-12-09 05:44:24.505624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 01:49:33.067 [2024-12-09 05:44:24.505634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.531355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.531410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:49:33.067 [2024-12-09 05:44:24.531446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.668 ms 01:49:33.067 [2024-12-09 05:44:24.531458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:49:33.067 [2024-12-09 05:44:24.531541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:49:33.067 [2024-12-09 05:44:24.531559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:49:33.067 [2024-12-09 05:44:24.531571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 01:49:33.068 [2024-12-09 05:44:24.531581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
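At this point the startup sequence has rebuilt the state left behind by the dirty shutdown: the NV cache state loaded (full chunks = 4, empty chunks = 0), the valid map, band info, trim metadata and P2L checkpoints were restored, and the L2P was reinitialized. The layout dump above is also internally consistent: 20971520 L2P entries at an address size of 4 bytes come to exactly the 80.00 MiB reported for the l2p region. A one-line check of that arithmetic (plain shell, not SPDK code):

# l2p region size = L2P entries * address size, expressed in MiB
echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80, matching "blocks: 80.00 MiB"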
01:49:33.068 [2024-12-09 05:44:24.535430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.819 ms, result 0 01:49:34.441  [2024-12-09T05:44:26.993Z] Copying: 988/1048576 [kB] (988 kBps) [2024-12-09T05:44:27.930Z] Copying: 5588/1048576 [kB] (4600 kBps) [2024-12-09T05:44:28.867Z] Copying: 30/1024 [MB] (25 MBps) [2024-12-09T05:44:29.823Z] Copying: 58/1024 [MB] (27 MBps) [2024-12-09T05:44:30.761Z] Copying: 85/1024 [MB] (27 MBps) [2024-12-09T05:44:32.137Z] Copying: 112/1024 [MB] (26 MBps) [2024-12-09T05:44:33.073Z] Copying: 140/1024 [MB] (27 MBps) [2024-12-09T05:44:34.009Z] Copying: 168/1024 [MB] (28 MBps) [2024-12-09T05:44:34.943Z] Copying: 196/1024 [MB] (28 MBps) [2024-12-09T05:44:35.879Z] Copying: 224/1024 [MB] (28 MBps) [2024-12-09T05:44:36.813Z] Copying: 252/1024 [MB] (27 MBps) [2024-12-09T05:44:37.748Z] Copying: 280/1024 [MB] (28 MBps) [2024-12-09T05:44:39.125Z] Copying: 308/1024 [MB] (27 MBps) [2024-12-09T05:44:40.061Z] Copying: 336/1024 [MB] (27 MBps) [2024-12-09T05:44:40.997Z] Copying: 363/1024 [MB] (27 MBps) [2024-12-09T05:44:41.933Z] Copying: 390/1024 [MB] (27 MBps) [2024-12-09T05:44:42.869Z] Copying: 418/1024 [MB] (27 MBps) [2024-12-09T05:44:43.804Z] Copying: 445/1024 [MB] (27 MBps) [2024-12-09T05:44:44.741Z] Copying: 473/1024 [MB] (27 MBps) [2024-12-09T05:44:46.120Z] Copying: 501/1024 [MB] (27 MBps) [2024-12-09T05:44:47.066Z] Copying: 528/1024 [MB] (27 MBps) [2024-12-09T05:44:48.000Z] Copying: 556/1024 [MB] (27 MBps) [2024-12-09T05:44:48.933Z] Copying: 584/1024 [MB] (27 MBps) [2024-12-09T05:44:49.867Z] Copying: 611/1024 [MB] (27 MBps) [2024-12-09T05:44:50.801Z] Copying: 639/1024 [MB] (27 MBps) [2024-12-09T05:44:51.736Z] Copying: 666/1024 [MB] (27 MBps) [2024-12-09T05:44:53.112Z] Copying: 694/1024 [MB] (27 MBps) [2024-12-09T05:44:54.049Z] Copying: 722/1024 [MB] (27 MBps) [2024-12-09T05:44:54.987Z] Copying: 750/1024 [MB] (27 MBps) [2024-12-09T05:44:55.937Z] Copying: 778/1024 [MB] (28 MBps) [2024-12-09T05:44:56.873Z] Copying: 806/1024 [MB] (28 MBps) [2024-12-09T05:44:57.810Z] Copying: 833/1024 [MB] (26 MBps) [2024-12-09T05:44:58.746Z] Copying: 861/1024 [MB] (27 MBps) [2024-12-09T05:45:00.121Z] Copying: 888/1024 [MB] (27 MBps) [2024-12-09T05:45:01.069Z] Copying: 916/1024 [MB] (27 MBps) [2024-12-09T05:45:02.003Z] Copying: 944/1024 [MB] (27 MBps) [2024-12-09T05:45:02.938Z] Copying: 971/1024 [MB] (26 MBps) [2024-12-09T05:45:03.901Z] Copying: 997/1024 [MB] (26 MBps) [2024-12-09T05:45:03.901Z] Copying: 1023/1024 [MB] (25 MBps) [2024-12-09T05:45:03.901Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-09 05:45:03.806191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.806289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 01:50:12.284 [2024-12-09 05:45:03.806335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 01:50:12.284 [2024-12-09 05:45:03.806349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.284 [2024-12-09 05:45:03.806381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 01:50:12.284 [2024-12-09 05:45:03.809826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.809875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:50:12.284 [2024-12-09 05:45:03.809889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 01:50:12.284 
[2024-12-09 05:45:03.809900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.284 [2024-12-09 05:45:03.810169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.810192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:50:12.284 [2024-12-09 05:45:03.810205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 01:50:12.284 [2024-12-09 05:45:03.810215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.284 [2024-12-09 05:45:03.821174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.821243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:50:12.284 [2024-12-09 05:45:03.821265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.938 ms 01:50:12.284 [2024-12-09 05:45:03.821278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.284 [2024-12-09 05:45:03.827066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.827115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:50:12.284 [2024-12-09 05:45:03.827152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.748 ms 01:50:12.284 [2024-12-09 05:45:03.827163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.284 [2024-12-09 05:45:03.853881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.853939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:50:12.284 [2024-12-09 05:45:03.853970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.657 ms 01:50:12.284 [2024-12-09 05:45:03.853980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.284 [2024-12-09 05:45:03.870886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.284 [2024-12-09 05:45:03.870926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:50:12.285 [2024-12-09 05:45:03.870966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 01:50:12.285 [2024-12-09 05:45:03.870977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.285 [2024-12-09 05:45:03.872976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.285 [2024-12-09 05:45:03.873025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:50:12.285 [2024-12-09 05:45:03.873040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms 01:50:12.285 [2024-12-09 05:45:03.873059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.285 [2024-12-09 05:45:03.899095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.285 [2024-12-09 05:45:03.899135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:50:12.285 [2024-12-09 05:45:03.899165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.016 ms 01:50:12.285 [2024-12-09 05:45:03.899175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.545 [2024-12-09 05:45:03.926671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.545 [2024-12-09 05:45:03.926739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:50:12.545 [2024-12-09 05:45:03.926771] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.457 ms 01:50:12.545 [2024-12-09 05:45:03.926787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.545 [2024-12-09 05:45:03.953657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.545 [2024-12-09 05:45:03.953738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:50:12.545 [2024-12-09 05:45:03.953775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.827 ms 01:50:12.545 [2024-12-09 05:45:03.953786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.545 [2024-12-09 05:45:03.978566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.545 [2024-12-09 05:45:03.978624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:50:12.545 [2024-12-09 05:45:03.978661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.696 ms 01:50:12.545 [2024-12-09 05:45:03.978686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.545 [2024-12-09 05:45:03.978727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:50:12.545 [2024-12-09 05:45:03.978749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:50:12.545 [2024-12-09 05:45:03.978762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 01:50:12.545 [2024-12-09 05:45:03.978774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:50:12.545 [2024-12-09 05:45:03.978902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978970] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.978992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979277] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 
05:45:03.979607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
01:50:12.546 [2024-12-09 05:45:03.979910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 01:50:12.546 [2024-12-09 05:45:03.979991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 01:50:12.547 [2024-12-09 05:45:03.980026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 01:50:12.547 [2024-12-09 05:45:03.980046] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 01:50:12.547 [2024-12-09 05:45:03.980058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f978506d-1366-4345-9c7a-0084bf34ece6 01:50:12.547 [2024-12-09 05:45:03.980077] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 01:50:12.547 [2024-12-09 05:45:03.980087] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 01:50:12.547 [2024-12-09 05:45:03.980102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 01:50:12.547 [2024-12-09 05:45:03.980113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 01:50:12.547 [2024-12-09 05:45:03.980123] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 01:50:12.547 [2024-12-09 05:45:03.980146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 01:50:12.547 [2024-12-09 05:45:03.980156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 01:50:12.547 [2024-12-09 05:45:03.980166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 01:50:12.547 [2024-12-09 05:45:03.980175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 01:50:12.547 [2024-12-09 05:45:03.980185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.547 [2024-12-09 05:45:03.980196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 01:50:12.547 [2024-12-09 05:45:03.980208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 01:50:12.547 [2024-12-09 05:45:03.980218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:03.994900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:12.547 [2024-12-09 05:45:03.994953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 01:50:12.547 [2024-12-09 05:45:03.994983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.660 ms 01:50:12.547 [2024-12-09 05:45:03.995002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:03.995496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 01:50:12.547 [2024-12-09 05:45:03.995521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:50:12.547 [2024-12-09 05:45:03.995550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 01:50:12.547 [2024-12-09 05:45:03.995561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:04.032995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.547 [2024-12-09 05:45:04.033047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:50:12.547 [2024-12-09 05:45:04.033078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.547 [2024-12-09 05:45:04.033089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:04.033147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.547 [2024-12-09 05:45:04.033163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:50:12.547 [2024-12-09 05:45:04.033174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.547 [2024-12-09 05:45:04.033184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:04.033339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.547 [2024-12-09 05:45:04.033359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:50:12.547 [2024-12-09 05:45:04.033371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.547 [2024-12-09 05:45:04.033391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:04.033414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.547 [2024-12-09 05:45:04.033427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:50:12.547 [2024-12-09 05:45:04.033439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.547 [2024-12-09 05:45:04.033450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.547 [2024-12-09 05:45:04.120514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.547 [2024-12-09 05:45:04.120591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:50:12.547 [2024-12-09 05:45:04.120624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.547 [2024-12-09 05:45:04.120637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.808 [2024-12-09 05:45:04.191454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.191505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:50:12.808 [2024-12-09 05:45:04.191537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.808 [2024-12-09 05:45:04.191547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.808 [2024-12-09 05:45:04.191619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.191643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:50:12.808 [2024-12-09 05:45:04.191655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.808 [2024-12-09 05:45:04.191665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
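For anyone spot-checking the statistics dump just above: the reported WAF lines up with the two write counters next to it, assuming WAF here is simply total writes over user writes. The second dump near the end of this run prints "WAF: inf" for the same reason, since its user-write counter is zero.

    # Spot-check of the first ftl_dev_dump_stats dump, assuming
    # WAF = total writes / user writes (the two counters printed beside it):
    awk 'BEGIN { printf "WAF: %.4f\n", 136384 / 134400 }'   # -> WAF: 1.0148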
01:50:12.808 [2024-12-09 05:45:04.191760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.191809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:50:12.808 [2024-12-09 05:45:04.191821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.808 [2024-12-09 05:45:04.191833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.808 [2024-12-09 05:45:04.191954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.191973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:50:12.808 [2024-12-09 05:45:04.191992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.808 [2024-12-09 05:45:04.192003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.808 [2024-12-09 05:45:04.192064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.192082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:50:12.808 [2024-12-09 05:45:04.192094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.808 [2024-12-09 05:45:04.192104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.808 [2024-12-09 05:45:04.192150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.192179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:50:12.808 [2024-12-09 05:45:04.192196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.808 [2024-12-09 05:45:04.192207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.808 [2024-12-09 05:45:04.192257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:50:12.808 [2024-12-09 05:45:04.192273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:50:12.809 [2024-12-09 05:45:04.192289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:50:12.809 [2024-12-09 05:45:04.192300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:12.809 [2024-12-09 05:45:04.192466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.253 ms, result 0 01:50:13.747 01:50:13.747 01:50:13.747 05:45:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:50:15.648 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 01:50:15.648 05:45:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:50:15.648 [2024-12-09 05:45:06.914192] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
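The two shell-trace lines above are the core of the dirty-shutdown verification: the first half of the device (testfile) just checked out against the md5 recorded before the unclean shutdown, and spdk_dd is relaunched to dump the second half. A sketch of the equivalent commands follows; SPDK_DIR and TESTDIR are stand-in variables for the absolute paths shown in the log, and the 4 KiB block size is inferred from 262144 blocks matching the 1024 MB copied later in this run.

    # Sketch, not the literal dirty_shutdown.sh code.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    TESTDIR=$SPDK_DIR/test/ftl

    # Dump the second half of the ftl0 bdev into a regular file:
    # 262144 blocks (1 GiB at a 4 KiB block size); --skip jumps over the
    # first half, which was verified a moment ago.
    "$SPDK_DIR"/build/bin/spdk_dd --ib=ftl0 --of="$TESTDIR"/testfile2 \
        --count=262144 --skip=262144 --json="$TESTDIR"/config/ftl.json

    # Then compare against the checksum recorded before the dirty shutdown:
    md5sum -c "$TESTDIR"/testfile2.md5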
01:50:15.648 [2024-12-09 05:45:06.914580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83316 ] 01:50:15.648 [2024-12-09 05:45:07.080445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:50:15.648 [2024-12-09 05:45:07.180049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:50:15.907 [2024-12-09 05:45:07.487632] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:50:15.908 [2024-12-09 05:45:07.487718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 01:50:16.168 [2024-12-09 05:45:07.647022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.647066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 01:50:16.168 [2024-12-09 05:45:07.647083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:50:16.168 [2024-12-09 05:45:07.647093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.647150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.647169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:50:16.168 [2024-12-09 05:45:07.647181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 01:50:16.168 [2024-12-09 05:45:07.647190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.647217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 01:50:16.168 [2024-12-09 05:45:07.647980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 01:50:16.168 [2024-12-09 05:45:07.648005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.648016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:50:16.168 [2024-12-09 05:45:07.648028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 01:50:16.168 [2024-12-09 05:45:07.648047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.650001] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 01:50:16.168 [2024-12-09 05:45:07.664559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.664596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 01:50:16.168 [2024-12-09 05:45:07.664612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 01:50:16.168 [2024-12-09 05:45:07.664622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.664713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.664732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 01:50:16.168 [2024-12-09 05:45:07.664744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 01:50:16.168 [2024-12-09 05:45:07.664754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.673229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
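Two details of this startup are worth noting. The pair of "Currently unable to find bdev with name: nvc0n1" notices is transient: the open is retried, and the cache device is in use as nvc0n1p0 a few entries later. And the Restore steps that follow rebuild the NV cache, valid map, band info, trim, P2L and L2P state persisted by the previous shutdown. A one-liner to list those steps from a saved copy of this console output ('build.log' is an assumed file name, one log entry per line):

    grep 'trace_step' build.log \
      | sed -n 's/.*name: \(Restore .*\)/\1/p' \
      | sort | uniq -c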
01:50:16.168 [2024-12-09 05:45:07.673264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:50:16.168 [2024-12-09 05:45:07.673278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.394 ms 01:50:16.168 [2024-12-09 05:45:07.673295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.673380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.673397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:50:16.168 [2024-12-09 05:45:07.673409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:50:16.168 [2024-12-09 05:45:07.673418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.168 [2024-12-09 05:45:07.673470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.168 [2024-12-09 05:45:07.673485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 01:50:16.168 [2024-12-09 05:45:07.673496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 01:50:16.168 [2024-12-09 05:45:07.673506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.169 [2024-12-09 05:45:07.673587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 01:50:16.169 [2024-12-09 05:45:07.677856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.169 [2024-12-09 05:45:07.677884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:50:16.169 [2024-12-09 05:45:07.677902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.276 ms 01:50:16.169 [2024-12-09 05:45:07.677911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.169 [2024-12-09 05:45:07.677949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.169 [2024-12-09 05:45:07.677964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 01:50:16.169 [2024-12-09 05:45:07.677975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 01:50:16.169 [2024-12-09 05:45:07.677984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.169 [2024-12-09 05:45:07.678043] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 01:50:16.169 [2024-12-09 05:45:07.678072] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 01:50:16.169 [2024-12-09 05:45:07.678109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 01:50:16.169 [2024-12-09 05:45:07.678132] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 01:50:16.169 [2024-12-09 05:45:07.678223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 01:50:16.169 [2024-12-09 05:45:07.678237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 01:50:16.169 [2024-12-09 05:45:07.678250] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 01:50:16.169 [2024-12-09 05:45:07.678263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678275] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 01:50:16.169 [2024-12-09 05:45:07.678296] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 01:50:16.169 [2024-12-09 05:45:07.678338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 01:50:16.169 [2024-12-09 05:45:07.678358] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 01:50:16.169 [2024-12-09 05:45:07.678370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.169 [2024-12-09 05:45:07.678380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 01:50:16.169 [2024-12-09 05:45:07.678391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 01:50:16.169 [2024-12-09 05:45:07.678402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.169 [2024-12-09 05:45:07.678487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.169 [2024-12-09 05:45:07.678503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 01:50:16.169 [2024-12-09 05:45:07.678514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 01:50:16.169 [2024-12-09 05:45:07.678525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.169 [2024-12-09 05:45:07.678634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 01:50:16.169 [2024-12-09 05:45:07.678683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 01:50:16.169 [2024-12-09 05:45:07.678694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 01:50:16.169 [2024-12-09 05:45:07.678740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 01:50:16.169 [2024-12-09 05:45:07.678771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:50:16.169 [2024-12-09 05:45:07.678788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 01:50:16.169 [2024-12-09 05:45:07.678798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 01:50:16.169 [2024-12-09 05:45:07.678807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 01:50:16.169 [2024-12-09 05:45:07.678828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 01:50:16.169 [2024-12-09 05:45:07.678839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 01:50:16.169 [2024-12-09 05:45:07.678849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 01:50:16.169 [2024-12-09 05:45:07.678868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678877] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 01:50:16.169 [2024-12-09 05:45:07.678895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 01:50:16.169 [2024-12-09 05:45:07.678923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 01:50:16.169 [2024-12-09 05:45:07.678950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 01:50:16.169 [2024-12-09 05:45:07.678978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 01:50:16.169 [2024-12-09 05:45:07.678987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 01:50:16.169 [2024-12-09 05:45:07.678996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 01:50:16.169 [2024-12-09 05:45:07.679006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 01:50:16.169 [2024-12-09 05:45:07.679015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:50:16.169 [2024-12-09 05:45:07.679024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 01:50:16.169 [2024-12-09 05:45:07.679033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 01:50:16.169 [2024-12-09 05:45:07.679042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 01:50:16.169 [2024-12-09 05:45:07.679052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 01:50:16.169 [2024-12-09 05:45:07.679061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 01:50:16.169 [2024-12-09 05:45:07.679070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.679080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 01:50:16.169 [2024-12-09 05:45:07.679089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 01:50:16.169 [2024-12-09 05:45:07.679099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.679110] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 01:50:16.169 [2024-12-09 05:45:07.679121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 01:50:16.169 [2024-12-09 05:45:07.679130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 01:50:16.169 [2024-12-09 05:45:07.679142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 01:50:16.169 [2024-12-09 05:45:07.679153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 01:50:16.169 [2024-12-09 05:45:07.679162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 01:50:16.169 [2024-12-09 05:45:07.679172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 01:50:16.169 
[2024-12-09 05:45:07.679181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 01:50:16.169 [2024-12-09 05:45:07.679190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 01:50:16.169 [2024-12-09 05:45:07.679199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 01:50:16.169 [2024-12-09 05:45:07.679210] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 01:50:16.169 [2024-12-09 05:45:07.679224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:50:16.169 [2024-12-09 05:45:07.679239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 01:50:16.169 [2024-12-09 05:45:07.679249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 01:50:16.169 [2024-12-09 05:45:07.679259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 01:50:16.169 [2024-12-09 05:45:07.679268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 01:50:16.169 [2024-12-09 05:45:07.679278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 01:50:16.169 [2024-12-09 05:45:07.679287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 01:50:16.169 [2024-12-09 05:45:07.679297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 01:50:16.169 [2024-12-09 05:45:07.679307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 01:50:16.169 [2024-12-09 05:45:07.679316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 01:50:16.169 [2024-12-09 05:45:07.679325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 01:50:16.169 [2024-12-09 05:45:07.679335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 01:50:16.169 [2024-12-09 05:45:07.679344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 01:50:16.169 [2024-12-09 05:45:07.679354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 01:50:16.169 [2024-12-09 05:45:07.679364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 01:50:16.170 [2024-12-09 05:45:07.679374] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 01:50:16.170 [2024-12-09 05:45:07.679384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:50:16.170 [2024-12-09 05:45:07.679396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 01:50:16.170 [2024-12-09 05:45:07.679406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 01:50:16.170 [2024-12-09 05:45:07.679416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 01:50:16.170 [2024-12-09 05:45:07.679425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 01:50:16.170 [2024-12-09 05:45:07.679436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.679447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 01:50:16.170 [2024-12-09 05:45:07.679457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 01:50:16.170 [2024-12-09 05:45:07.679468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.713807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.713854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:50:16.170 [2024-12-09 05:45:07.713871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.283 ms 01:50:16.170 [2024-12-09 05:45:07.713887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.713991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.714006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 01:50:16.170 [2024-12-09 05:45:07.714018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 01:50:16.170 [2024-12-09 05:45:07.714027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.758544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.758588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:50:16.170 [2024-12-09 05:45:07.758606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.437 ms 01:50:16.170 [2024-12-09 05:45:07.758617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.758711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.758729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:50:16.170 [2024-12-09 05:45:07.758747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 01:50:16.170 [2024-12-09 05:45:07.758757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.759424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.759448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:50:16.170 [2024-12-09 05:45:07.759462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 01:50:16.170 [2024-12-09 05:45:07.759472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.759624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.759643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:50:16.170 [2024-12-09 05:45:07.759673] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 01:50:16.170 [2024-12-09 05:45:07.759686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.170 [2024-12-09 05:45:07.776515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.170 [2024-12-09 05:45:07.776551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:50:16.170 [2024-12-09 05:45:07.776566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.802 ms 01:50:16.170 [2024-12-09 05:45:07.776576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.790938] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 01:50:16.431 [2024-12-09 05:45:07.790989] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 01:50:16.431 [2024-12-09 05:45:07.791006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.791017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 01:50:16.431 [2024-12-09 05:45:07.791028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.266 ms 01:50:16.431 [2024-12-09 05:45:07.791039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.816814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.816867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 01:50:16.431 [2024-12-09 05:45:07.816884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.730 ms 01:50:16.431 [2024-12-09 05:45:07.816896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.831764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.831816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 01:50:16.431 [2024-12-09 05:45:07.831831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.800 ms 01:50:16.431 [2024-12-09 05:45:07.831842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.845349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.845400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 01:50:16.431 [2024-12-09 05:45:07.845414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.465 ms 01:50:16.431 [2024-12-09 05:45:07.845424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.846348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.846378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 01:50:16.431 [2024-12-09 05:45:07.846398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 01:50:16.431 [2024-12-09 05:45:07.846408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.927094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.927172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 01:50:16.431 [2024-12-09 05:45:07.927197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.660 ms 01:50:16.431 [2024-12-09 05:45:07.927209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.937926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 01:50:16.431 [2024-12-09 05:45:07.940445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.940488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 01:50:16.431 [2024-12-09 05:45:07.940503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.171 ms 01:50:16.431 [2024-12-09 05:45:07.940514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.940610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.940630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 01:50:16.431 [2024-12-09 05:45:07.940646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 01:50:16.431 [2024-12-09 05:45:07.940657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.941777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.941818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 01:50:16.431 [2024-12-09 05:45:07.941832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 01:50:16.431 [2024-12-09 05:45:07.941842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.941877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.941892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 01:50:16.431 [2024-12-09 05:45:07.941903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 01:50:16.431 [2024-12-09 05:45:07.941914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.941962] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 01:50:16.431 [2024-12-09 05:45:07.941978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.941988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 01:50:16.431 [2024-12-09 05:45:07.942000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 01:50:16.431 [2024-12-09 05:45:07.942011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.967832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.967884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 01:50:16.431 [2024-12-09 05:45:07.967906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.783 ms 01:50:16.431 [2024-12-09 05:45:07.967917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:50:16.431 [2024-12-09 05:45:07.967997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:50:16.431 [2024-12-09 05:45:07.968022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 01:50:16.431 [2024-12-09 05:45:07.968034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 01:50:16.431 [2024-12-09 05:45:07.968045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
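The 322 ms 'FTL startup' total reported on the next line is dominated by a handful of the steps traced above: Restore P2L checkpoints (80.660 ms), Initialize NV cache (44.437 ms), Initialize metadata (34.283 ms), and the ~25 ms Set-dirty-state and Restore-valid-map steps. A small pipeline to tabulate step names against durations from a saved log ('build.log' again an assumed name):

    # mngt/ftl_mngt.c emits "name:" and "duration:" as consecutive entries,
    # so paste folds each pair into one tab-separated row.
    grep -E 'trace_step.*(name|duration):' build.log \
      | sed -E 's/.*(name|duration): //' \
      | paste - - \
      | sort -t$'\t' -k2,2 -rn | head   # slowest steps first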
01:50:16.431 [2024-12-09 05:45:07.969645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.056 ms, result 0
01:50:17.808  [2024-12-09T05:45:10.360Z] Copying: 22/1024 [MB] (22 MBps) [... 43 intermediate progress entries at a steady 21-23 MBps condensed ...] [2024-12-09T05:45:53.752Z] Copying: 1014/1024 [MB] (22 MBps) [2024-12-09T05:45:54.011Z] Copying: 1024/1024 [MB] (average 22 MBps)
[2024-12-09 05:45:53.961028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:51:02.394 [2024-12-09 05:45:53.961127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
01:51:02.394 [2024-12-09 05:45:53.961163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
01:51:02.394 [2024-12-09 05:45:53.961186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:51:02.394 [2024-12-09 05:45:53.961242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
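Two observations on the copy pass above. The throughput is self-consistent: 1024 MB at the reported average works out to roughly the ~44 s the progress stamps span (05:45:10Z to 05:45:54Z), allowing for the floor-rounded rate. And because this pass only reads from ftl0 (--ib), the clean shutdown below reports user writes: 0 in its statistics dump; the "WAF: inf" there is just total writes divided by that zero.

    # Expected duration of the 1024 MB read-back at the reported average rate:
    awk 'BEGIN { printf "%.0f s\n", 1024 / 22 }'   # ~47 s, vs ~44 s between
                                                   # first and last progress stamps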
01:51:02.394 [2024-12-09 05:45:53.967379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.394 [2024-12-09 05:45:53.967437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 01:51:02.394 [2024-12-09 05:45:53.967468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 01:51:02.394 [2024-12-09 05:45:53.967479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.394 [2024-12-09 05:45:53.967834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.394 [2024-12-09 05:45:53.967867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 01:51:02.394 [2024-12-09 05:45:53.967886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 01:51:02.394 [2024-12-09 05:45:53.967898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.394 [2024-12-09 05:45:53.971177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.394 [2024-12-09 05:45:53.971237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 01:51:02.394 [2024-12-09 05:45:53.971268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 01:51:02.394 [2024-12-09 05:45:53.971286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.394 [2024-12-09 05:45:53.977297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.394 [2024-12-09 05:45:53.977350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 01:51:02.394 [2024-12-09 05:45:53.977380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.986 ms 01:51:02.394 [2024-12-09 05:45:53.977391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.394 [2024-12-09 05:45:54.007726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.394 [2024-12-09 05:45:54.007784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 01:51:02.394 [2024-12-09 05:45:54.007827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.257 ms 01:51:02.394 [2024-12-09 05:45:54.007839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.024542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.654 [2024-12-09 05:45:54.024597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 01:51:02.654 [2024-12-09 05:45:54.024629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.657 ms 01:51:02.654 [2024-12-09 05:45:54.024641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.026736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.654 [2024-12-09 05:45:54.026795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 01:51:02.654 [2024-12-09 05:45:54.026828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 01:51:02.654 [2024-12-09 05:45:54.026839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.052017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.654 [2024-12-09 05:45:54.052084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 01:51:02.654 [2024-12-09 05:45:54.052116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.141 ms 01:51:02.654 [2024-12-09 05:45:54.052126] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.077472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.654 [2024-12-09 05:45:54.077524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 01:51:02.654 [2024-12-09 05:45:54.077556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.306 ms 01:51:02.654 [2024-12-09 05:45:54.077566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.102423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.654 [2024-12-09 05:45:54.102465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 01:51:02.654 [2024-12-09 05:45:54.102497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.818 ms 01:51:02.654 [2024-12-09 05:45:54.102507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.127219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.654 [2024-12-09 05:45:54.127275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 01:51:02.654 [2024-12-09 05:45:54.127306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.647 ms 01:51:02.654 [2024-12-09 05:45:54.127315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.654 [2024-12-09 05:45:54.127355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 01:51:02.654 [2024-12-09 05:45:54.127383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:51:02.654 [2024-12-09 05:45:54.127400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 01:51:02.654 [2024-12-09 05:45:54.127411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 [2024-12-09 05:45:54.127621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:51:02.654 
[2024-12-09 05:45:54.127640-05:45:54.129256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 15-100: 0 / 261120 wr_cnt: 0 state: free (86 identical entries)
01:51:02.655 [2024-12-09 05:45:54.129287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
01:51:02.655 [2024-12-09 05:45:54.129310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f978506d-1366-4345-9c7a-0084bf34ece6
01:51:02.655 [2024-12-09 05:45:54.129331] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
01:51:02.655 [2024-12-09 05:45:54.129350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
01:51:02.655 [2024-12-09 05:45:54.129364] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
01:51:02.655 [2024-12-09 05:45:54.129382] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
01:51:02.655 [2024-12-09 05:45:54.129419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
01:51:02.655 [2024-12-09 05:45:54.129435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
01:51:02.655 [2024-12-09 05:45:54.129446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
01:51:02.655 [2024-12-09 05:45:54.129458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
01:51:02.655 [2024-12-09 05:45:54.129475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
01:51:02.655 [2024-12-09 05:45:54.129496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:51:02.655 [2024-12-09 05:45:54.129517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
01:51:02.655 [2024-12-09 05:45:54.129532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms
01:51:02.655 [2024-12-09 05:45:54.129557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
01:51:02.655 [2024-12-09 05:45:54.144304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
01:51:02.655 [2024-12-09 05:45:54.144341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Deinitialize L2P 01:51:02.655 [2024-12-09 05:45:54.144373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.712 ms 01:51:02.655 [2024-12-09 05:45:54.144383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.655 [2024-12-09 05:45:54.144909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 01:51:02.655 [2024-12-09 05:45:54.144960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 01:51:02.655 [2024-12-09 05:45:54.144987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 01:51:02.655 [2024-12-09 05:45:54.145008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.655 [2024-12-09 05:45:54.182316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.655 [2024-12-09 05:45:54.182369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 01:51:02.655 [2024-12-09 05:45:54.182401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.655 [2024-12-09 05:45:54.182412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.655 [2024-12-09 05:45:54.182485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.655 [2024-12-09 05:45:54.182506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 01:51:02.655 [2024-12-09 05:45:54.182517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.655 [2024-12-09 05:45:54.182528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.655 [2024-12-09 05:45:54.182635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.655 [2024-12-09 05:45:54.182653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 01:51:02.655 [2024-12-09 05:45:54.182665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.655 [2024-12-09 05:45:54.182675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.655 [2024-12-09 05:45:54.182733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.655 [2024-12-09 05:45:54.182749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 01:51:02.655 [2024-12-09 05:45:54.182766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.655 [2024-12-09 05:45:54.182777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.913 [2024-12-09 05:45:54.269851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.913 [2024-12-09 05:45:54.270066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 01:51:02.913 [2024-12-09 05:45:54.270106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.913 [2024-12-09 05:45:54.270127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.913 [2024-12-09 05:45:54.338681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.913 [2024-12-09 05:45:54.338727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 01:51:02.913 [2024-12-09 05:45:54.338759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.913 [2024-12-09 05:45:54.338769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.913 [2024-12-09 05:45:54.338838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.913 [2024-12-09 
05:45:54.338854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 01:51:02.914 [2024-12-09 05:45:54.338865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.914 [2024-12-09 05:45:54.338875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.914 [2024-12-09 05:45:54.338983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.914 [2024-12-09 05:45:54.339011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 01:51:02.914 [2024-12-09 05:45:54.339025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.914 [2024-12-09 05:45:54.339043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.914 [2024-12-09 05:45:54.339201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.914 [2024-12-09 05:45:54.339246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 01:51:02.914 [2024-12-09 05:45:54.339271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.914 [2024-12-09 05:45:54.339292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.914 [2024-12-09 05:45:54.339367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.914 [2024-12-09 05:45:54.339404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 01:51:02.914 [2024-12-09 05:45:54.339417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.914 [2024-12-09 05:45:54.339427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.914 [2024-12-09 05:45:54.339481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.914 [2024-12-09 05:45:54.339511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 01:51:02.914 [2024-12-09 05:45:54.339523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.914 [2024-12-09 05:45:54.339534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.914 [2024-12-09 05:45:54.339581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 01:51:02.914 [2024-12-09 05:45:54.339597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 01:51:02.914 [2024-12-09 05:45:54.339608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 01:51:02.914 [2024-12-09 05:45:54.339623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 01:51:02.914 [2024-12-09 05:45:54.339791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.718 ms, result 0 01:51:03.848 01:51:03.848 01:51:03.848 05:45:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 01:51:05.750 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 01:51:05.750 05:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 01:51:05.750 05:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 01:51:05.750 05:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 01:51:05.750 05:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81411 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81411 ']' 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81411 01:51:05.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81411) - No such process 01:51:05.750 Process with pid 81411 is not found 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81411 is not found' 01:51:05.750 05:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 01:51:06.039 Remove shared memory files 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 01:51:06.039 ************************************ 01:51:06.039 END TEST ftl_dirty_shutdown 01:51:06.039 ************************************ 01:51:06.039 01:51:06.039 real 3m59.974s 01:51:06.039 user 4m38.819s 01:51:06.039 sys 0m35.143s 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 01:51:06.039 05:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 01:51:06.039 05:45:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 01:51:06.039 05:45:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:51:06.039 05:45:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 01:51:06.039 05:45:57 ftl -- common/autotest_common.sh@10 -- # set +x 01:51:06.039 ************************************ 01:51:06.039 START TEST ftl_upgrade_shutdown 01:51:06.039 ************************************ 01:51:06.039 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 01:51:06.298 * Looking for test storage... 
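
A note on the teardown traced just above, before the next test's output begins: restore_kill removes the scratch files and the FTL config, then calls killprocess on pid 81411, which is already gone, so the kill -0 probe fails with "No such process" and the helper simply reports that and moves on rather than failing the test. A minimal sketch of that kill-if-alive pattern (simplified from what the autotest_common.sh trace shows, not a verbatim copy of it):

    # Probe-then-kill pattern from the trace above (simplified sketch).
    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1            # nothing to kill
        if kill -0 "$pid" 2>/dev/null; then  # signal 0 only tests existence
            kill "$pid" && wait "$pid" 2>/dev/null
        else
            echo "Process with pid $pid is not found"   # message seen in the log
        fi
    }

The non-fatal path matters here: the target process has already exited by the time cleanup runs, so cleanup must tolerate a missing pid instead of aborting the suite.
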
01:51:06.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 01:51:06.298 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:51:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:51:06.299 --rc genhtml_branch_coverage=1 01:51:06.299 --rc genhtml_function_coverage=1 01:51:06.299 --rc genhtml_legend=1 01:51:06.299 --rc geninfo_all_blocks=1 01:51:06.299 --rc geninfo_unexecuted_blocks=1 01:51:06.299 01:51:06.299 ' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:51:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:51:06.299 --rc genhtml_branch_coverage=1 01:51:06.299 --rc genhtml_function_coverage=1 01:51:06.299 --rc genhtml_legend=1 01:51:06.299 --rc geninfo_all_blocks=1 01:51:06.299 --rc geninfo_unexecuted_blocks=1 01:51:06.299 01:51:06.299 ' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:51:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:51:06.299 --rc genhtml_branch_coverage=1 01:51:06.299 --rc genhtml_function_coverage=1 01:51:06.299 --rc genhtml_legend=1 01:51:06.299 --rc geninfo_all_blocks=1 01:51:06.299 --rc geninfo_unexecuted_blocks=1 01:51:06.299 01:51:06.299 ' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:51:06.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:51:06.299 --rc genhtml_branch_coverage=1 01:51:06.299 --rc genhtml_function_coverage=1 01:51:06.299 --rc genhtml_legend=1 01:51:06.299 --rc geninfo_all_blocks=1 01:51:06.299 --rc geninfo_unexecuted_blocks=1 01:51:06.299 01:51:06.299 ' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 01:51:06.299 05:45:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83871 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83871 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83871 ']' 01:51:06.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:51:06.299 05:45:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:51:06.557 [2024-12-09 05:45:58.021164] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
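
Annotating the setup traced above: tcp_target_setup first checks that every FTL_* parameter is set before it launches spdk_tgt. The params array at ftl/common.sh@99 holds the variable names, and the loop at @100-101 tests each expanded value with [[ -z ... ]]. A minimal sketch of that guard, assuming indirect expansion is how each name is dereferenced (the trace only shows the already-expanded values):

    # Per-parameter guard from ftl/common.sh@99-101 (sketch).
    # ${!param} (indirect expansion) is an assumption consistent with the
    # expanded values visible in the trace, e.g. [[ -z 0000:00:11.0 ]].
    params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
    for param in "${params[@]}"; do
        if [[ -z ${!param} ]]; then
            echo "$param is not set" >&2   # sketch message, not the log's wording
            exit 1
        fi
    done

With all six set (bdev name ftl, base 0000:00:11.0 at 20480 MiB, cache 0000:00:10.0 at 5120 MiB, 2 GiB L2P DRAM limit), the target is started and create_base_bdev attaches the base NVMe controller.
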
01:51:06.557 [2024-12-09 05:45:58.021520] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83871 ] 01:51:06.816 [2024-12-09 05:45:58.208498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:06.816 [2024-12-09 05:45:58.324955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 01:51:07.749 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 01:51:08.008 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 01:51:08.266 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:51:08.266 { 01:51:08.266 "name": "basen1", 01:51:08.266 "aliases": [ 01:51:08.266 "9157fd5e-8071-476a-9215-946f9ebd0e27" 01:51:08.266 ], 01:51:08.266 "product_name": "NVMe disk", 01:51:08.266 "block_size": 4096, 01:51:08.266 "num_blocks": 1310720, 01:51:08.266 "uuid": "9157fd5e-8071-476a-9215-946f9ebd0e27", 01:51:08.266 "numa_id": -1, 01:51:08.266 "assigned_rate_limits": { 01:51:08.266 "rw_ios_per_sec": 0, 01:51:08.266 "rw_mbytes_per_sec": 0, 01:51:08.266 "r_mbytes_per_sec": 0, 01:51:08.266 "w_mbytes_per_sec": 0 01:51:08.266 }, 01:51:08.266 "claimed": true, 01:51:08.266 "claim_type": "read_many_write_one", 01:51:08.266 "zoned": false, 01:51:08.266 "supported_io_types": { 01:51:08.266 "read": true, 01:51:08.266 "write": true, 01:51:08.266 "unmap": true, 01:51:08.266 "flush": true, 01:51:08.266 "reset": true, 01:51:08.266 "nvme_admin": true, 01:51:08.266 "nvme_io": true, 01:51:08.266 "nvme_io_md": false, 01:51:08.266 "write_zeroes": true, 01:51:08.266 "zcopy": false, 01:51:08.266 "get_zone_info": false, 01:51:08.266 "zone_management": false, 01:51:08.266 "zone_append": false, 01:51:08.266 "compare": true, 01:51:08.267 "compare_and_write": false, 01:51:08.267 "abort": true, 01:51:08.267 "seek_hole": false, 01:51:08.267 "seek_data": false, 01:51:08.267 "copy": true, 01:51:08.267 "nvme_iov_md": false 01:51:08.267 }, 01:51:08.267 "driver_specific": { 01:51:08.267 "nvme": [ 01:51:08.267 { 01:51:08.267 "pci_address": "0000:00:11.0", 01:51:08.267 "trid": { 01:51:08.267 "trtype": "PCIe", 01:51:08.267 "traddr": "0000:00:11.0" 01:51:08.267 }, 01:51:08.267 "ctrlr_data": { 01:51:08.267 "cntlid": 0, 01:51:08.267 "vendor_id": "0x1b36", 01:51:08.267 "model_number": "QEMU NVMe Ctrl", 01:51:08.267 "serial_number": "12341", 01:51:08.267 "firmware_revision": "8.0.0", 01:51:08.267 "subnqn": "nqn.2019-08.org.qemu:12341", 01:51:08.267 "oacs": { 01:51:08.267 "security": 0, 01:51:08.267 "format": 1, 01:51:08.267 "firmware": 0, 01:51:08.267 "ns_manage": 1 01:51:08.267 }, 01:51:08.267 "multi_ctrlr": false, 01:51:08.267 "ana_reporting": false 01:51:08.267 }, 01:51:08.267 "vs": { 01:51:08.267 "nvme_version": "1.4" 01:51:08.267 }, 01:51:08.267 "ns_data": { 01:51:08.267 "id": 1, 01:51:08.267 "can_share": false 01:51:08.267 } 01:51:08.267 } 01:51:08.267 ], 01:51:08.267 "mp_policy": "active_passive" 01:51:08.267 } 01:51:08.267 } 01:51:08.267 ]' 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:51:08.267 05:45:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 01:51:08.526 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7d2d15e4-dece-473b-bcaa-61bd1d89c879 01:51:08.526 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 01:51:08.526 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d2d15e4-dece-473b-bcaa-61bd1d89c879 01:51:08.795 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 01:51:09.055 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2a9a0461-2746-4b9a-87cf-4553cbb3216d 01:51:09.055 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2a9a0461-2746-4b9a-87cf-4553cbb3216d 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f4211dee-c285-48cb-8ae2-1dcee9fa49b7 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f4211dee-c285-48cb-8ae2-1dcee9fa49b7 ]] 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f4211dee-c285-48cb-8ae2-1dcee9fa49b7 5120 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f4211dee-c285-48cb-8ae2-1dcee9fa49b7 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f4211dee-c285-48cb-8ae2-1dcee9fa49b7 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f4211dee-c285-48cb-8ae2-1dcee9fa49b7 01:51:09.312 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 01:51:09.313 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 01:51:09.313 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 01:51:09.313 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f4211dee-c285-48cb-8ae2-1dcee9fa49b7 01:51:09.313 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:51:09.313 { 01:51:09.313 "name": "f4211dee-c285-48cb-8ae2-1dcee9fa49b7", 01:51:09.313 "aliases": [ 01:51:09.313 "lvs/basen1p0" 01:51:09.313 ], 01:51:09.313 "product_name": "Logical Volume", 01:51:09.313 "block_size": 4096, 01:51:09.313 "num_blocks": 5242880, 01:51:09.313 "uuid": "f4211dee-c285-48cb-8ae2-1dcee9fa49b7", 01:51:09.313 "assigned_rate_limits": { 01:51:09.313 "rw_ios_per_sec": 0, 01:51:09.313 "rw_mbytes_per_sec": 0, 01:51:09.313 "r_mbytes_per_sec": 0, 01:51:09.313 "w_mbytes_per_sec": 0 01:51:09.313 }, 01:51:09.313 "claimed": false, 01:51:09.313 "zoned": false, 01:51:09.313 "supported_io_types": { 01:51:09.313 "read": true, 01:51:09.313 "write": true, 01:51:09.313 "unmap": true, 01:51:09.313 "flush": false, 01:51:09.313 "reset": true, 01:51:09.313 "nvme_admin": false, 01:51:09.313 "nvme_io": false, 01:51:09.313 "nvme_io_md": false, 01:51:09.313 "write_zeroes": 
true, 01:51:09.313 "zcopy": false, 01:51:09.313 "get_zone_info": false, 01:51:09.313 "zone_management": false, 01:51:09.313 "zone_append": false, 01:51:09.313 "compare": false, 01:51:09.313 "compare_and_write": false, 01:51:09.313 "abort": false, 01:51:09.313 "seek_hole": true, 01:51:09.313 "seek_data": true, 01:51:09.313 "copy": false, 01:51:09.313 "nvme_iov_md": false 01:51:09.313 }, 01:51:09.313 "driver_specific": { 01:51:09.313 "lvol": { 01:51:09.313 "lvol_store_uuid": "2a9a0461-2746-4b9a-87cf-4553cbb3216d", 01:51:09.313 "base_bdev": "basen1", 01:51:09.313 "thin_provision": true, 01:51:09.313 "num_allocated_clusters": 0, 01:51:09.313 "snapshot": false, 01:51:09.313 "clone": false, 01:51:09.313 "esnap_clone": false 01:51:09.313 } 01:51:09.313 } 01:51:09.313 } 01:51:09.313 ]' 01:51:09.313 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:51:09.571 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 01:51:09.571 05:46:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:51:09.571 05:46:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 01:51:09.571 05:46:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 01:51:09.571 05:46:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 01:51:09.571 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 01:51:09.571 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 01:51:09.571 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 01:51:09.829 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 01:51:09.829 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 01:51:09.829 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 01:51:10.087 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 01:51:10.087 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 01:51:10.087 05:46:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f4211dee-c285-48cb-8ae2-1dcee9fa49b7 -c cachen1p0 --l2p_dram_limit 2 01:51:10.348 [2024-12-09 05:46:01.818678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.818958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:51:10.348 [2024-12-09 05:46:01.818998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:51:10.348 [2024-12-09 05:46:01.819012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.819098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.819115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:51:10.348 [2024-12-09 05:46:01.819146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 01:51:10.348 [2024-12-09 05:46:01.819172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.819203] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:51:10.348 [2024-12-09 
05:46:01.820084] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:51:10.348 [2024-12-09 05:46:01.820114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.820125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:51:10.348 [2024-12-09 05:46:01.820141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 01:51:10.348 [2024-12-09 05:46:01.820152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.820274] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e0174778-4451-42ff-a3d0-0e505c20040a 01:51:10.348 [2024-12-09 05:46:01.822197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.822231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 01:51:10.348 [2024-12-09 05:46:01.822283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 01:51:10.348 [2024-12-09 05:46:01.822299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.831971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.832217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:51:10.348 [2024-12-09 05:46:01.832343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.619 ms 01:51:10.348 [2024-12-09 05:46:01.832467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.832626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.832789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:51:10.348 [2024-12-09 05:46:01.832898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 01:51:10.348 [2024-12-09 05:46:01.832956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.833101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.833156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:51:10.348 [2024-12-09 05:46:01.833199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 01:51:10.348 [2024-12-09 05:46:01.833238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.833313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:51:10.348 [2024-12-09 05:46:01.838160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.838385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:51:10.348 [2024-12-09 05:46:01.838523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.855 ms 01:51:10.348 [2024-12-09 05:46:01.838653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.838746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.838794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:51:10.348 [2024-12-09 05:46:01.838890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:51:10.348 [2024-12-09 05:46:01.838935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.839080] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 01:51:10.348 [2024-12-09 05:46:01.839310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:51:10.348 [2024-12-09 05:46:01.839459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:51:10.348 [2024-12-09 05:46:01.839588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:51:10.348 [2024-12-09 05:46:01.839776] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:51:10.348 [2024-12-09 05:46:01.840033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 01:51:10.348 [2024-12-09 05:46:01.840250] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:51:10.348 [2024-12-09 05:46:01.840408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:51:10.348 [2024-12-09 05:46:01.840479] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:51:10.348 [2024-12-09 05:46:01.840521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:51:10.348 [2024-12-09 05:46:01.840628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.840735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:51:10.348 [2024-12-09 05:46:01.840838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.550 ms 01:51:10.348 [2024-12-09 05:46:01.840935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.841080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.348 [2024-12-09 05:46:01.841136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:51:10.348 [2024-12-09 05:46:01.841228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 01:51:10.348 [2024-12-09 05:46:01.841272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.348 [2024-12-09 05:46:01.841497] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:51:10.348 [2024-12-09 05:46:01.841617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:51:10.348 [2024-12-09 05:46:01.841695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:51:10.348 [2024-12-09 05:46:01.841810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.348 [2024-12-09 05:46:01.841951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:51:10.348 [2024-12-09 05:46:01.842080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:51:10.348 [2024-12-09 05:46:01.842187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:51:10.348 [2024-12-09 05:46:01.842233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:51:10.348 [2024-12-09 05:46:01.842350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:51:10.348 [2024-12-09 05:46:01.842397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.348 [2024-12-09 05:46:01.842463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:51:10.348 [2024-12-09 05:46:01.842519] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 01:51:10.348 [2024-12-09 05:46:01.842558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.348 [2024-12-09 05:46:01.842645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:51:10.349 [2024-12-09 05:46:01.842726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 01:51:10.349 [2024-12-09 05:46:01.842853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.842883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:51:10.349 [2024-12-09 05:46:01.842896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:51:10.349 [2024-12-09 05:46:01.842911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.842922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:51:10.349 [2024-12-09 05:46:01.842935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:51:10.349 [2024-12-09 05:46:01.842945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:51:10.349 [2024-12-09 05:46:01.842958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:51:10.349 [2024-12-09 05:46:01.842968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:51:10.349 [2024-12-09 05:46:01.842995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:51:10.349 [2024-12-09 05:46:01.843020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:51:10.349 [2024-12-09 05:46:01.843032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:51:10.349 [2024-12-09 05:46:01.843043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:51:10.349 [2024-12-09 05:46:01.843056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:51:10.349 [2024-12-09 05:46:01.843066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:51:10.349 [2024-12-09 05:46:01.843078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:51:10.349 [2024-12-09 05:46:01.843087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:51:10.349 [2024-12-09 05:46:01.843101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:51:10.349 [2024-12-09 05:46:01.843111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.843138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:51:10.349 [2024-12-09 05:46:01.843148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:51:10.349 [2024-12-09 05:46:01.843160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.843169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:51:10.349 [2024-12-09 05:46:01.843181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:51:10.349 [2024-12-09 05:46:01.843190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.843202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:51:10.349 [2024-12-09 05:46:01.843211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:51:10.349 [2024-12-09 05:46:01.843223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.843232] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 01:51:10.349 [2024-12-09 05:46:01.843246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:51:10.349 [2024-12-09 05:46:01.843256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:51:10.349 [2024-12-09 05:46:01.843271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:51:10.349 [2024-12-09 05:46:01.843283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:51:10.349 [2024-12-09 05:46:01.843297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:51:10.349 [2024-12-09 05:46:01.843307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:51:10.349 [2024-12-09 05:46:01.843319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:51:10.349 [2024-12-09 05:46:01.843329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:51:10.349 [2024-12-09 05:46:01.843341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:51:10.349 [2024-12-09 05:46:01.843356] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:51:10.349 [2024-12-09 05:46:01.843375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:51:10.349 [2024-12-09 05:46:01.843401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:51:10.349 [2024-12-09 05:46:01.843436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:51:10.349 [2024-12-09 05:46:01.843449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:51:10.349 [2024-12-09 05:46:01.843461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:51:10.349 [2024-12-09 05:46:01.843473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:51:10.349 [2024-12-09 05:46:01.843560] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 01:51:10.349 [2024-12-09 05:46:01.843575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:51:10.349 [2024-12-09 05:46:01.843600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:51:10.349 [2024-12-09 05:46:01.843610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:51:10.349 [2024-12-09 05:46:01.843623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:51:10.349 [2024-12-09 05:46:01.843636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:10.349 [2024-12-09 05:46:01.843649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:51:10.349 [2024-12-09 05:46:01.843660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.204 ms 01:51:10.349 [2024-12-09 05:46:01.843689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:10.349 [2024-12-09 05:46:01.843839] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
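
The layout dump above is internally consistent, and checking the arithmetic is a useful way to read it: data_btm exposes 18432 MiB of the 20480 MiB base device in 4 KiB blocks, i.e. 4718592 data blocks, and the reported 3774873 L2P entries come to almost exactly 80% of that, suggesting roughly 20% of the band space is held back as spare (the ratio is inferred from the numbers; the log does not state it). At the reported 4 bytes per L2P address, the table needs about 14.40 MiB, which fits the 14.50 MiB l2p region reserved in the NV cache layout. A quick check under those assumptions:

    # Sanity-check the layout numbers printed above; the 0.80 factor
    # (overprovisioning) is inferred from the log, not stated in it.
    awk 'BEGIN {
        blocks  = 18432 * 256              # data_btm MiB -> 4 KiB blocks = 4718592
        l2p     = int(blocks * 0.80)       # -> 3774873, the "L2P entries" line
        l2p_mib = l2p * 4 / 1048576        # 4-byte entries -> ~14.40 MiB
        printf "blocks=%d l2p_entries=%d l2p_size=%.2f MiB\n", blocks, l2p, l2p_mib
    }'
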
01:51:10.349 [2024-12-09 05:46:01.843867] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 01:51:15.663 [2024-12-09 05:46:06.324053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.324126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 01:51:15.663 [2024-12-09 05:46:06.324146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4480.244 ms 01:51:15.663 [2024-12-09 05:46:06.324159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.358640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.358740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:51:15.663 [2024-12-09 05:46:06.358760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.250 ms 01:51:15.663 [2024-12-09 05:46:06.358773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.358902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.358924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:51:15.663 [2024-12-09 05:46:06.358937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 01:51:15.663 [2024-12-09 05:46:06.358955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.397043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.397123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:51:15.663 [2024-12-09 05:46:06.397140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.038 ms 01:51:15.663 [2024-12-09 05:46:06.397154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.397201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.397216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:51:15.663 [2024-12-09 05:46:06.397228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:51:15.663 [2024-12-09 05:46:06.397240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.397893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.397930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:51:15.663 [2024-12-09 05:46:06.397955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 01:51:15.663 [2024-12-09 05:46:06.397969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.398020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.398054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:51:15.663 [2024-12-09 05:46:06.398082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 01:51:15.663 [2024-12-09 05:46:06.398113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.416543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.416815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:51:15.663 [2024-12-09 05:46:06.416843] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.391 ms 01:51:15.663 [2024-12-09 05:46:06.416859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.439607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:51:15.663 [2024-12-09 05:46:06.441107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.441138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:51:15.663 [2024-12-09 05:46:06.441173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.144 ms 01:51:15.663 [2024-12-09 05:46:06.441184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.475371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.475416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 01:51:15.663 [2024-12-09 05:46:06.475452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.133 ms 01:51:15.663 [2024-12-09 05:46:06.475464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.475564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.475582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:51:15.663 [2024-12-09 05:46:06.475599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 01:51:15.663 [2024-12-09 05:46:06.475610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.501580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.501621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 01:51:15.663 [2024-12-09 05:46:06.501657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.874 ms 01:51:15.663 [2024-12-09 05:46:06.501684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.526728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.526765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 01:51:15.663 [2024-12-09 05:46:06.526784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.959 ms 01:51:15.663 [2024-12-09 05:46:06.526794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.527507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.527539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:51:15.663 [2024-12-09 05:46:06.527560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.671 ms 01:51:15.663 [2024-12-09 05:46:06.527570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.625111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.625160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 01:51:15.663 [2024-12-09 05:46:06.625183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 97.493 ms 01:51:15.663 [2024-12-09 05:46:06.625194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.651433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
01:51:15.663 [2024-12-09 05:46:06.651474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 01:51:15.663 [2024-12-09 05:46:06.651492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.171 ms 01:51:15.663 [2024-12-09 05:46:06.651503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.675896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.675934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 01:51:15.663 [2024-12-09 05:46:06.675951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.362 ms 01:51:15.663 [2024-12-09 05:46:06.675961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.700474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.700513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 01:51:15.663 [2024-12-09 05:46:06.700531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.485 ms 01:51:15.663 [2024-12-09 05:46:06.700541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.700574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.700586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:51:15.663 [2024-12-09 05:46:06.700601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:51:15.663 [2024-12-09 05:46:06.700611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.700737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:15.663 [2024-12-09 05:46:06.700758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 01:51:15.663 [2024-12-09 05:46:06.700771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 01:51:15.663 [2024-12-09 05:46:06.700781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:15.663 [2024-12-09 05:46:06.702395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4882.936 ms, result 0 01:51:15.663 { 01:51:15.663 "name": "ftl", 01:51:15.663 "uuid": "e0174778-4451-42ff-a3d0-0e505c20040a" 01:51:15.663 } 01:51:15.663 05:46:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 01:51:15.663 [2024-12-09 05:46:07.009096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:51:15.663 05:46:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 01:51:15.663 05:46:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 01:51:15.921 [2024-12-09 05:46:07.449436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 01:51:15.921 05:46:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 01:51:16.179 [2024-12-09 05:46:07.735017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:51:16.179 05:46:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 01:51:16.746 Fill FTL, iteration 1 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84006 01:51:16.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84006 /var/tmp/spdk.tgt.sock 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84006 ']' 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 01:51:16.746 05:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:51:16.746 [2024-12-09 05:46:08.255981] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
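[Annotation] The common.sh@121-126 sequence traced above exports the freshly started FTL bdev over NVMe-oF/TCP so a separate initiator process can drive I/O to it, and upgrade_shutdown.sh@28-34 then fixes the fill geometry: bs=1048576 x count=1024 = 1073741824 bytes, i.e. each of the two iterations moves exactly 1 GiB at queue depth 2. A minimal sketch of the target-side export, reusing only rpc.py calls that appear verbatim in the trace (paths shortened from /home/vagrant/spdk_repo/spdk/ for readability):

    # expose bdev "ftl" through subsystem cnode0 on loopback TCP port 4420,
    # then persist the target config so it can be replayed after restart
    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    scripts/rpc.py save_config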
01:51:16.746 [2024-12-09 05:46:08.256600] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84006 ] 01:51:17.004 [2024-12-09 05:46:08.435184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:17.004 [2024-12-09 05:46:08.578791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:51:17.938 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:51:17.938 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:51:17.938 05:46:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 01:51:18.196 ftln1 01:51:18.196 05:46:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 01:51:18.196 05:46:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84006 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84006 ']' 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84006 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84006 01:51:18.454 killing process with pid 84006 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84006' 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84006 01:51:18.454 05:46:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84006 01:51:20.353 05:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 01:51:20.353 05:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 01:51:20.611 [2024-12-09 05:46:11.976001] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
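[Annotation] tcp_initiator_setup above is a one-shot helper: it boots a throwaway spdk_tgt (pid 84006 here, pinned to core 1) on a private RPC socket, attaches to the exported subsystem — which surfaces the namespace as bdev ftln1 — captures the resulting bdev configuration as JSON, and kills the helper again. Every later tcp_dd call replays that JSON through spdk_dd's --json flag instead of re-issuing RPCs. A sketch of the capture step assembled from the traced common.sh@167-173 lines (the redirection target is inferred from the --json= path used afterwards, not visible in the trace itself):

    rpc='scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0        # prints "ftln1"
    {
        echo '{"subsystems": ['                      # wrap the bdev subsystem
        $rpc save_subsystem_config -n bdev           # dump into a full config
        echo ']}'
    } > test/ftl/config/ini.json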
01:51:20.611 [2024-12-09 05:46:11.976189] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84058 ] 01:51:20.611 [2024-12-09 05:46:12.160994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:20.869 [2024-12-09 05:46:12.263139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:51:22.246  [2024-12-09T05:46:14.799Z] Copying: 217/1024 [MB] (217 MBps) [2024-12-09T05:46:15.734Z] Copying: 433/1024 [MB] (216 MBps) [2024-12-09T05:46:17.110Z] Copying: 655/1024 [MB] (222 MBps) [2024-12-09T05:46:17.677Z] Copying: 870/1024 [MB] (215 MBps) [2024-12-09T05:46:18.614Z] Copying: 1024/1024 [MB] (average 217 MBps) 01:51:26.997 01:51:26.997 Calculate MD5 checksum, iteration 1 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:51:26.997 05:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:51:26.997 [2024-12-09 05:46:18.459375] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
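[Annotation] Each checksum pass is the mirror image of the fill: the same 1 GiB window is read back out of ftln1 into a plain file and hashed. Note the direction of the flag pairs — --if/--ob writes into the FTL bdev, --ib/--of reads out of it — and that this time tcp_initiator_setup returns immediately (common.sh@154) because ini.json already exists. The equivalent standalone commands, copied from the trace:

    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum test/ftl/file | cut -f1 '-d '   # yields the sums[0] digest recorded below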
01:51:26.997 [2024-12-09 05:46:18.459550] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84118 ] 01:51:27.256 [2024-12-09 05:46:18.637703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:27.256 [2024-12-09 05:46:18.748967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:51:28.632  [2024-12-09T05:46:21.192Z] Copying: 461/1024 [MB] (461 MBps) [2024-12-09T05:46:21.529Z] Copying: 932/1024 [MB] (471 MBps) [2024-12-09T05:46:22.473Z] Copying: 1024/1024 [MB] (average 467 MBps) 01:51:30.856 01:51:30.856 05:46:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 01:51:30.856 05:46:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:51:32.757 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 01:51:32.757 Fill FTL, iteration 2 01:51:32.757 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=772568825810109396ec634543a9a25e 01:51:32.757 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 01:51:32.757 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:51:32.757 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 01:51:32.757 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 01:51:32.758 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:51:32.758 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:51:32.758 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:51:32.758 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:51:32.758 05:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 01:51:32.758 [2024-12-09 05:46:24.091122] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
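[Annotation] The offsets are pure block arithmetic: after each pass, seek and skip advance by count (upgrade_shutdown.sh@41/@45), so with bs=1048576 iteration 2 starts at byte offset 1024 x 1 MiB = 1 GiB and writes the second stripe. A paraphrase of the loop implied by the traced script lines — not the verbatim script; tcp_dd is the helper shown earlier — with the digests presumably kept for re-verification after the upgrade-shutdown/restart cycle that follows:

    bs=1048576 count=1024 qd=2 iterations=2 seek=0 skip=0
    testfile=test/ftl/file
    sums=()
    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        (( seek += count ))      # 0 -> 1024 -> 2048: next 1 GiB stripe
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        (( skip += count ))      # read window advances in lockstep
        sums[i]=$(md5sum $testfile | cut -f1 '-d ')
    done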
01:51:32.758 [2024-12-09 05:46:24.091304] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84188 ] 01:51:32.758 [2024-12-09 05:46:24.280031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:33.016 [2024-12-09 05:46:24.423391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:51:34.393  [2024-12-09T05:46:26.944Z] Copying: 218/1024 [MB] (218 MBps) [2024-12-09T05:46:27.879Z] Copying: 435/1024 [MB] (217 MBps) [2024-12-09T05:46:29.256Z] Copying: 657/1024 [MB] (222 MBps) [2024-12-09T05:46:29.824Z] Copying: 876/1024 [MB] (219 MBps) [2024-12-09T05:46:30.761Z] Copying: 1024/1024 [MB] (average 215 MBps) 01:51:39.144 01:51:39.144 Calculate MD5 checksum, iteration 2 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:51:39.144 05:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:51:39.144 [2024-12-09 05:46:30.673704] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
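[Annotation] Once the second digest lands (just below), the data-moving half of the test is done and it switches to property inspection: verbose_mode is enabled first — its description string in the dumps says it unlocks additional advanced FTL properties — and a jq filter then counts cache chunks with non-zero utilization. The check, reconstructed from the upgrade_shutdown.sh@52-64 traces below:

    scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]]   # evaluates false here: the dump below shows used=3
                        # (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at 0.001953125)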
01:51:39.144 [2024-12-09 05:46:30.674790] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84262 ] 01:51:39.403 [2024-12-09 05:46:30.853015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:39.403 [2024-12-09 05:46:30.963564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:51:41.305  [2024-12-09T05:46:33.857Z] Copying: 466/1024 [MB] (466 MBps) [2024-12-09T05:46:33.857Z] Copying: 938/1024 [MB] (472 MBps) [2024-12-09T05:46:35.230Z] Copying: 1024/1024 [MB] (average 468 MBps) 01:51:43.613 01:51:43.613 05:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 01:51:43.613 05:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:51:45.514 05:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 01:51:45.514 05:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7c4dda1e996d7ba05b83f8fba330507d 01:51:45.514 05:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 01:51:45.514 05:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 01:51:45.514 05:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:51:45.514 [2024-12-09 05:46:37.000187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:45.514 [2024-12-09 05:46:37.000240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:51:45.514 [2024-12-09 05:46:37.000259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:51:45.514 [2024-12-09 05:46:37.000270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:45.514 [2024-12-09 05:46:37.000301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:45.514 [2024-12-09 05:46:37.000320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:51:45.514 [2024-12-09 05:46:37.000332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:51:45.514 [2024-12-09 05:46:37.000341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:45.514 [2024-12-09 05:46:37.000365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:45.514 [2024-12-09 05:46:37.000376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:51:45.514 [2024-12-09 05:46:37.000386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:51:45.514 [2024-12-09 05:46:37.000396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:45.514 [2024-12-09 05:46:37.000463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.264 ms, result 0 01:51:45.514 true 01:51:45.514 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:51:45.773 { 01:51:45.773 "name": "ftl", 01:51:45.773 "properties": [ 01:51:45.773 { 01:51:45.773 "name": "superblock_version", 01:51:45.773 "value": 5, 01:51:45.773 "read-only": true 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "name": "base_device", 01:51:45.773 "bands": [ 01:51:45.773 { 01:51:45.773 "id": 
0, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 1, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 2, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 3, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 4, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 5, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 6, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 7, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 8, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 9, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 10, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 11, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.773 { 01:51:45.773 "id": 12, 01:51:45.773 "state": "FREE", 01:51:45.773 "validity": 0.0 01:51:45.773 }, 01:51:45.774 { 01:51:45.774 "id": 13, 01:51:45.774 "state": "FREE", 01:51:45.774 "validity": 0.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 14, 01:51:45.774 "state": "FREE", 01:51:45.774 "validity": 0.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 15, 01:51:45.774 "state": "FREE", 01:51:45.774 "validity": 0.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 16, 01:51:45.774 "state": "FREE", 01:51:45.774 "validity": 0.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 17, 01:51:45.774 "state": "FREE", 01:51:45.774 "validity": 0.0 01:51:45.774 } 01:51:45.774 ], 01:51:45.774 "read-only": true 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "name": "cache_device", 01:51:45.774 "type": "bdev", 01:51:45.774 "chunks": [ 01:51:45.774 { 01:51:45.774 "id": 0, 01:51:45.774 "state": "INACTIVE", 01:51:45.774 "utilization": 0.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 1, 01:51:45.774 "state": "CLOSED", 01:51:45.774 "utilization": 1.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 2, 01:51:45.774 "state": "CLOSED", 01:51:45.774 "utilization": 1.0 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 3, 01:51:45.774 "state": "OPEN", 01:51:45.774 "utilization": 0.001953125 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "id": 4, 01:51:45.774 "state": "OPEN", 01:51:45.774 "utilization": 0.0 01:51:45.774 } 01:51:45.774 ], 01:51:45.774 "read-only": true 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "name": "verbose_mode", 01:51:45.774 "value": true, 01:51:45.774 "unit": "", 01:51:45.774 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:51:45.774 }, 01:51:45.774 { 01:51:45.774 "name": "prep_upgrade_on_shutdown", 01:51:45.774 "value": false, 01:51:45.774 "unit": "", 01:51:45.774 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:51:45.774 } 01:51:45.774 ] 01:51:45.774 } 01:51:45.774 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 01:51:46.045 [2024-12-09 05:46:37.484554] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:46.045 [2024-12-09 05:46:37.484597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:51:46.045 [2024-12-09 05:46:37.484612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:51:46.045 [2024-12-09 05:46:37.484622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:46.045 [2024-12-09 05:46:37.484649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:46.045 [2024-12-09 05:46:37.484693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:51:46.045 [2024-12-09 05:46:37.484723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:51:46.045 [2024-12-09 05:46:37.484733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:46.045 [2024-12-09 05:46:37.484758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:46.045 [2024-12-09 05:46:37.484782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:51:46.045 [2024-12-09 05:46:37.484793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:51:46.045 [2024-12-09 05:46:37.484818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:46.045 [2024-12-09 05:46:37.484879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.309 ms, result 0 01:51:46.045 true 01:51:46.045 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 01:51:46.046 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:51:46.046 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 01:51:46.319 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 01:51:46.319 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 01:51:46.319 05:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:51:46.579 [2024-12-09 05:46:37.985136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:46.579 [2024-12-09 05:46:37.985180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:51:46.579 [2024-12-09 05:46:37.985197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:51:46.579 [2024-12-09 05:46:37.985207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:46.579 [2024-12-09 05:46:37.985236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:46.579 [2024-12-09 05:46:37.985250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:51:46.579 [2024-12-09 05:46:37.985260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:51:46.579 [2024-12-09 05:46:37.985269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:46.579 [2024-12-09 05:46:37.985291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:46.579 [2024-12-09 05:46:37.985302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:51:46.579 [2024-12-09 05:46:37.985312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:51:46.579 [2024-12-09 
05:46:37.985321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:46.579 [2024-12-09 05:46:37.985381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.234 ms, result 0 01:51:46.579 true 01:51:46.579 05:46:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:51:46.839 { 01:51:46.839 "name": "ftl", 01:51:46.839 "properties": [ 01:51:46.839 { 01:51:46.839 "name": "superblock_version", 01:51:46.839 "value": 5, 01:51:46.839 "read-only": true 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "name": "base_device", 01:51:46.839 "bands": [ 01:51:46.839 { 01:51:46.839 "id": 0, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 1, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 2, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 3, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 4, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 5, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 6, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 7, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 8, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 9, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 10, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.839 { 01:51:46.839 "id": 11, 01:51:46.839 "state": "FREE", 01:51:46.839 "validity": 0.0 01:51:46.839 }, 01:51:46.840 { 01:51:46.840 "id": 12, 01:51:46.840 "state": "FREE", 01:51:46.840 "validity": 0.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 13, 01:51:46.840 "state": "FREE", 01:51:46.840 "validity": 0.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 14, 01:51:46.840 "state": "FREE", 01:51:46.840 "validity": 0.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 15, 01:51:46.840 "state": "FREE", 01:51:46.840 "validity": 0.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 16, 01:51:46.840 "state": "FREE", 01:51:46.840 "validity": 0.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 17, 01:51:46.840 "state": "FREE", 01:51:46.840 "validity": 0.0 01:51:46.840 } 01:51:46.840 ], 01:51:46.840 "read-only": true 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "name": "cache_device", 01:51:46.840 "type": "bdev", 01:51:46.840 "chunks": [ 01:51:46.840 { 01:51:46.840 "id": 0, 01:51:46.840 "state": "INACTIVE", 01:51:46.840 "utilization": 0.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 1, 01:51:46.840 "state": "CLOSED", 01:51:46.840 "utilization": 1.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 2, 01:51:46.840 "state": "CLOSED", 01:51:46.840 "utilization": 1.0 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 3, 01:51:46.840 "state": "OPEN", 01:51:46.840 "utilization": 0.001953125 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "id": 4, 01:51:46.840 "state": "OPEN", 01:51:46.840 "utilization": 0.0 01:51:46.840 } 01:51:46.840 ], 01:51:46.840 "read-only": true 01:51:46.840 
}, 01:51:46.840 { 01:51:46.840 "name": "verbose_mode", 01:51:46.840 "value": true, 01:51:46.840 "unit": "", 01:51:46.840 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:51:46.840 }, 01:51:46.840 { 01:51:46.840 "name": "prep_upgrade_on_shutdown", 01:51:46.840 "value": true, 01:51:46.840 "unit": "", 01:51:46.840 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:51:46.840 } 01:51:46.840 ] 01:51:46.840 } 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83871 ]] 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83871 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83871 ']' 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83871 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83871 01:51:46.840 killing process with pid 83871 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83871' 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83871 01:51:46.840 05:46:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83871 01:51:47.778 [2024-12-09 05:46:39.068751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 01:51:47.778 [2024-12-09 05:46:39.083154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:47.778 [2024-12-09 05:46:39.083195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 01:51:47.778 [2024-12-09 05:46:39.083231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:51:47.778 [2024-12-09 05:46:39.083241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:47.778 [2024-12-09 05:46:39.083268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 01:51:47.778 [2024-12-09 05:46:39.086509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:47.778 [2024-12-09 05:46:39.086777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 01:51:47.778 [2024-12-09 05:46:39.086819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.222 ms 01:51:47.778 [2024-12-09 05:46:39.086839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.170855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.170929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 01:51:55.894 [2024-12-09 05:46:47.170951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8084.037 ms 01:51:55.894 [2024-12-09 05:46:47.170967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 
05:46:47.172047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.172095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 01:51:55.894 [2024-12-09 05:46:47.172110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 01:51:55.894 [2024-12-09 05:46:47.172121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.173262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.173295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 01:51:55.894 [2024-12-09 05:46:47.173309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.103 ms 01:51:55.894 [2024-12-09 05:46:47.173320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.184881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.184919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 01:51:55.894 [2024-12-09 05:46:47.184934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.517 ms 01:51:55.894 [2024-12-09 05:46:47.184944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.192368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.192409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 01:51:55.894 [2024-12-09 05:46:47.192441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.386 ms 01:51:55.894 [2024-12-09 05:46:47.192451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.192561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.192581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 01:51:55.894 [2024-12-09 05:46:47.192600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 01:51:55.894 [2024-12-09 05:46:47.192610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.204999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.205277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 01:51:55.894 [2024-12-09 05:46:47.205304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.369 ms 01:51:55.894 [2024-12-09 05:46:47.205331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.216953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.216990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 01:51:55.894 [2024-12-09 05:46:47.217020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.577 ms 01:51:55.894 [2024-12-09 05:46:47.217030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.228304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.228515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 01:51:55.894 [2024-12-09 05:46:47.228543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.209 ms 01:51:55.894 [2024-12-09 05:46:47.228555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 01:51:55.894 [2024-12-09 05:46:47.239815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.239854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 01:51:55.894 [2024-12-09 05:46:47.239885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.160 ms 01:51:55.894 [2024-12-09 05:46:47.239895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.239933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 01:51:55.894 [2024-12-09 05:46:47.239968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:51:55.894 [2024-12-09 05:46:47.239981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 01:51:55.894 [2024-12-09 05:46:47.239993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 01:51:55.894 [2024-12-09 05:46:47.240004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:51:55.894 [2024-12-09 05:46:47.240200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 01:51:55.894 [2024-12-09 05:46:47.240210] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e0174778-4451-42ff-a3d0-0e505c20040a 01:51:55.894 [2024-12-09 05:46:47.240221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 01:51:55.894 [2024-12-09 
05:46:47.240230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 01:51:55.894 [2024-12-09 05:46:47.240239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 01:51:55.894 [2024-12-09 05:46:47.240249] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 01:51:55.894 [2024-12-09 05:46:47.240259] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 01:51:55.894 [2024-12-09 05:46:47.240273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 01:51:55.894 [2024-12-09 05:46:47.240283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 01:51:55.894 [2024-12-09 05:46:47.240291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 01:51:55.894 [2024-12-09 05:46:47.240301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 01:51:55.894 [2024-12-09 05:46:47.240310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.240323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 01:51:55.894 [2024-12-09 05:46:47.240334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 01:51:55.894 [2024-12-09 05:46:47.240345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.255461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.255498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 01:51:55.894 [2024-12-09 05:46:47.255529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.079 ms 01:51:55.894 [2024-12-09 05:46:47.255546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.256117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:51:55.894 [2024-12-09 05:46:47.256176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 01:51:55.894 [2024-12-09 05:46:47.256191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 01:51:55.894 [2024-12-09 05:46:47.256201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.303873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.894 [2024-12-09 05:46:47.303920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:51:55.894 [2024-12-09 05:46:47.303957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.894 [2024-12-09 05:46:47.303969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.304009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.894 [2024-12-09 05:46:47.304022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:51:55.894 [2024-12-09 05:46:47.304033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.894 [2024-12-09 05:46:47.304043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.894 [2024-12-09 05:46:47.304145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.304163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:51:55.895 [2024-12-09 05:46:47.304174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.304189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 01:51:55.895 [2024-12-09 05:46:47.304211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.304222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:51:55.895 [2024-12-09 05:46:47.304233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.304242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.390923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.390982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:51:55.895 [2024-12-09 05:46:47.391014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.391030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.460995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:51:55.895 [2024-12-09 05:46:47.461078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:51:55.895 [2024-12-09 05:46:47.461227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:51:55.895 [2024-12-09 05:46:47.461324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:51:55.895 [2024-12-09 05:46:47.461471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 01:51:55.895 [2024-12-09 05:46:47.461566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:51:55.895 [2024-12-09 05:46:47.461643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461653] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:51:55.895 [2024-12-09 05:46:47.461786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:51:55.895 [2024-12-09 05:46:47.461813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:51:55.895 [2024-12-09 05:46:47.461823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:51:55.895 [2024-12-09 05:46:47.461966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8378.841 ms, result 0 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84467 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84467 01:51:59.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84467 ']' 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:51:59.182 05:46:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:51:59.182 [2024-12-09 05:46:50.614553] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
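[Annotation] The numbers in the shutdown dump above are internally consistent: two 1 GiB fills at the FTL's 4 KiB block granularity are 2 GiB / 4 KiB = 524288 blocks, matching both "total valid LBAs" and "user writes" (and the per-band validity sum 261120 + 261120 + 2048), while 786752 / 524288 reproduces the reported WAF of 1.5006 — the extra 262464 blocks are internal writes the log does not break down. A quick sanity check:

    echo $(( 2 * 1024**3 / 4096 ))          # 524288 user blocks written
    echo 'scale=4; 786752 / 524288' | bc    # 1.5006, the reported WAF

The restart that follows launches a fresh spdk_tgt (pid 84467, on core 0 this time) from tgt.json — presumably written by the save_config call traced at common.sh@126 — so the new target re-creates the same bdev stack and re-opens the FTL device that was prepared for upgrade during shutdown.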
01:51:59.182 [2024-12-09 05:46:50.614934] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84467 ] 01:51:59.182 [2024-12-09 05:46:50.783171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:51:59.442 [2024-12-09 05:46:50.888397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:52:00.381 [2024-12-09 05:46:51.705435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:52:00.381 [2024-12-09 05:46:51.705513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:52:00.381 [2024-12-09 05:46:51.851240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.851284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:52:00.381 [2024-12-09 05:46:51.851319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 01:52:00.381 [2024-12-09 05:46:51.851330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.851405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.851424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:52:00.381 [2024-12-09 05:46:51.851436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 01:52:00.381 [2024-12-09 05:46:51.851445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.851475] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:52:00.381 [2024-12-09 05:46:51.852391] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:52:00.381 [2024-12-09 05:46:51.852422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.852435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:52:00.381 [2024-12-09 05:46:51.852446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.953 ms 01:52:00.381 [2024-12-09 05:46:51.852456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.854619] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 01:52:00.381 [2024-12-09 05:46:51.869337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.869397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 01:52:00.381 [2024-12-09 05:46:51.869429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.719 ms 01:52:00.381 [2024-12-09 05:46:51.869440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.869506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.869524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 01:52:00.381 [2024-12-09 05:46:51.869536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 01:52:00.381 [2024-12-09 05:46:51.869545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.878668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 
05:46:51.878739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:52:00.381 [2024-12-09 05:46:51.878771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.032 ms 01:52:00.381 [2024-12-09 05:46:51.878782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.878873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.878893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:52:00.381 [2024-12-09 05:46:51.878905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 01:52:00.381 [2024-12-09 05:46:51.878915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.878974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.878995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:52:00.381 [2024-12-09 05:46:51.879007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:52:00.381 [2024-12-09 05:46:51.879017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.879053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:52:00.381 [2024-12-09 05:46:51.883620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.883699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:52:00.381 [2024-12-09 05:46:51.883738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.574 ms 01:52:00.381 [2024-12-09 05:46:51.883748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.883780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.883794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:52:00.381 [2024-12-09 05:46:51.883805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:52:00.381 [2024-12-09 05:46:51.883815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.883877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 01:52:00.381 [2024-12-09 05:46:51.883911] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 01:52:00.381 [2024-12-09 05:46:51.883949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 01:52:00.381 [2024-12-09 05:46:51.883968] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 01:52:00.381 [2024-12-09 05:46:51.884064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:52:00.381 [2024-12-09 05:46:51.884095] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:52:00.381 [2024-12-09 05:46:51.884108] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:52:00.381 [2024-12-09 05:46:51.884121] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:52:00.381 [2024-12-09 05:46:51.884138] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 01:52:00.381 [2024-12-09 05:46:51.884149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:52:00.381 [2024-12-09 05:46:51.884159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:52:00.381 [2024-12-09 05:46:51.884170] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:52:00.381 [2024-12-09 05:46:51.884179] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:52:00.381 [2024-12-09 05:46:51.884191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.884201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:52:00.381 [2024-12-09 05:46:51.884212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 01:52:00.381 [2024-12-09 05:46:51.884222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.884303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.381 [2024-12-09 05:46:51.884316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:52:00.381 [2024-12-09 05:46:51.884331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 01:52:00.381 [2024-12-09 05:46:51.884341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.381 [2024-12-09 05:46:51.884441] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:52:00.381 [2024-12-09 05:46:51.884457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:52:00.381 [2024-12-09 05:46:51.884468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:52:00.381 [2024-12-09 05:46:51.884478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.381 [2024-12-09 05:46:51.884489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:52:00.381 [2024-12-09 05:46:51.884498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:52:00.381 [2024-12-09 05:46:51.884508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:52:00.381 [2024-12-09 05:46:51.884517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:52:00.381 [2024-12-09 05:46:51.884528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:52:00.381 [2024-12-09 05:46:51.884538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.381 [2024-12-09 05:46:51.884548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:52:00.381 [2024-12-09 05:46:51.884557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 01:52:00.381 [2024-12-09 05:46:51.884566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.381 [2024-12-09 05:46:51.884575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:52:00.381 [2024-12-09 05:46:51.884592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 01:52:00.381 [2024-12-09 05:46:51.884602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.381 [2024-12-09 05:46:51.884616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:52:00.382 [2024-12-09 05:46:51.884626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:52:00.382 [2024-12-09 05:46:51.884636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.382 [2024-12-09 05:46:51.884645] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:52:00.382 [2024-12-09 05:46:51.884655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:52:00.382 [2024-12-09 05:46:51.884664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:00.382 [2024-12-09 05:46:51.884674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:52:00.382 [2024-12-09 05:46:51.884711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:52:00.382 [2024-12-09 05:46:51.884738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:00.382 [2024-12-09 05:46:51.884764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:52:00.382 [2024-12-09 05:46:51.884775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:52:00.382 [2024-12-09 05:46:51.884784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:00.382 [2024-12-09 05:46:51.884793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:52:00.382 [2024-12-09 05:46:51.884819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:52:00.382 [2024-12-09 05:46:51.884829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:00.382 [2024-12-09 05:46:51.884838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:52:00.382 [2024-12-09 05:46:51.884849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:52:00.382 [2024-12-09 05:46:51.884859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.382 [2024-12-09 05:46:51.884869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:52:00.382 [2024-12-09 05:46:51.884879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:52:00.382 [2024-12-09 05:46:51.884889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.382 [2024-12-09 05:46:51.884899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:52:00.382 [2024-12-09 05:46:51.884909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:52:00.382 [2024-12-09 05:46:51.884919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.382 [2024-12-09 05:46:51.884929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:52:00.382 [2024-12-09 05:46:51.884939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:52:00.382 [2024-12-09 05:46:51.884949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.382 [2024-12-09 05:46:51.884959] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 01:52:00.382 [2024-12-09 05:46:51.884970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:52:00.382 [2024-12-09 05:46:51.884980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:52:00.382 [2024-12-09 05:46:51.885004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:00.382 [2024-12-09 05:46:51.885016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:52:00.382 [2024-12-09 05:46:51.885027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:52:00.382 [2024-12-09 05:46:51.885037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:52:00.382 [2024-12-09 05:46:51.885047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:52:00.382 [2024-12-09 05:46:51.885057] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:52:00.382 [2024-12-09 05:46:51.885067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:52:00.382 [2024-12-09 05:46:51.885079] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:52:00.382 [2024-12-09 05:46:51.885092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:52:00.382 [2024-12-09 05:46:51.885115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:52:00.382 [2024-12-09 05:46:51.885148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:52:00.382 [2024-12-09 05:46:51.885159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:52:00.382 [2024-12-09 05:46:51.885170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:52:00.382 [2024-12-09 05:46:51.885181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:52:00.382 [2024-12-09 05:46:51.885255] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 01:52:00.382 [2024-12-09 05:46:51.885267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:52:00.382 [2024-12-09 05:46:51.885290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:52:00.382 [2024-12-09 05:46:51.885301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:52:00.382 [2024-12-09 05:46:51.885312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:52:00.382 [2024-12-09 05:46:51.885323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:00.382 [2024-12-09 05:46:51.885335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:52:00.382 [2024-12-09 05:46:51.885346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.938 ms 01:52:00.382 [2024-12-09 05:46:51.885363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:00.382 [2024-12-09 05:46:51.885430] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 01:52:00.382 [2024-12-09 05:46:51.885458] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 01:52:03.688 [2024-12-09 05:46:54.814124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.688 [2024-12-09 05:46:54.814190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 01:52:03.688 [2024-12-09 05:46:54.814235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2928.714 ms 01:52:03.688 [2024-12-09 05:46:54.814247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.688 [2024-12-09 05:46:54.848658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.688 [2024-12-09 05:46:54.848717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:52:03.688 [2024-12-09 05:46:54.848751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.150 ms 01:52:03.689 [2024-12-09 05:46:54.848762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.848889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.848909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:52:03.689 [2024-12-09 05:46:54.848922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 01:52:03.689 [2024-12-09 05:46:54.848932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.890091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.890138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:52:03.689 [2024-12-09 05:46:54.890177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.087 ms 01:52:03.689 [2024-12-09 05:46:54.890188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.890278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.890295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:52:03.689 [2024-12-09 05:46:54.890307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:52:03.689 [2024-12-09 05:46:54.890318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.891036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.891062] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:52:03.689 [2024-12-09 05:46:54.891076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.615 ms 01:52:03.689 [2024-12-09 05:46:54.891096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.891189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.891204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:52:03.689 [2024-12-09 05:46:54.891216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 01:52:03.689 [2024-12-09 05:46:54.891227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.910930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.911171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:52:03.689 [2024-12-09 05:46:54.911199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.675 ms 01:52:03.689 [2024-12-09 05:46:54.911212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.935727] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 01:52:03.689 [2024-12-09 05:46:54.935770] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 01:52:03.689 [2024-12-09 05:46:54.935805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.935817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 01:52:03.689 [2024-12-09 05:46:54.935829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.436 ms 01:52:03.689 [2024-12-09 05:46:54.935839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.951306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.951348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 01:52:03.689 [2024-12-09 05:46:54.951381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.420 ms 01:52:03.689 [2024-12-09 05:46:54.951392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.964502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.964729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 01:52:03.689 [2024-12-09 05:46:54.964756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.062 ms 01:52:03.689 [2024-12-09 05:46:54.964769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.978481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.978730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 01:52:03.689 [2024-12-09 05:46:54.978757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.664 ms 01:52:03.689 [2024-12-09 05:46:54.978769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:54.979624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:54.979659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:52:03.689 [2024-12-09 
05:46:54.979724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 01:52:03.689 [2024-12-09 05:46:54.979737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.045926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.045995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 01:52:03.689 [2024-12-09 05:46:55.046031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.159 ms 01:52:03.689 [2024-12-09 05:46:55.046043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.056257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:52:03.689 [2024-12-09 05:46:55.056959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.056996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:52:03.689 [2024-12-09 05:46:55.057013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.851 ms 01:52:03.689 [2024-12-09 05:46:55.057024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.057146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.057168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 01:52:03.689 [2024-12-09 05:46:55.057182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 01:52:03.689 [2024-12-09 05:46:55.057193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.057269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.057287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:52:03.689 [2024-12-09 05:46:55.057300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 01:52:03.689 [2024-12-09 05:46:55.057310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.057375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.057389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 01:52:03.689 [2024-12-09 05:46:55.057405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 01:52:03.689 [2024-12-09 05:46:55.057416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.057458] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 01:52:03.689 [2024-12-09 05:46:55.057474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.057485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 01:52:03.689 [2024-12-09 05:46:55.057496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 01:52:03.689 [2024-12-09 05:46:55.057506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.082593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.082832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 01:52:03.689 [2024-12-09 05:46:55.082956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.058 ms 01:52:03.689 [2024-12-09 05:46:55.083006] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.083132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.689 [2024-12-09 05:46:55.083278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 01:52:03.689 [2024-12-09 05:46:55.083327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 01:52:03.689 [2024-12-09 05:46:55.083363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.689 [2024-12-09 05:46:55.084895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3233.066 ms, result 0 01:52:03.689 [2024-12-09 05:46:55.099354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:52:03.689 [2024-12-09 05:46:55.115361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 01:52:03.689 [2024-12-09 05:46:55.123540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:52:03.689 05:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:52:03.689 05:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 01:52:03.689 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:52:03.689 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 01:52:03.689 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 01:52:03.950 [2024-12-09 05:46:55.419586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.950 [2024-12-09 05:46:55.419633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 01:52:03.951 [2024-12-09 05:46:55.419657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:52:03.951 [2024-12-09 05:46:55.419714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.951 [2024-12-09 05:46:55.419747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.951 [2024-12-09 05:46:55.419762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 01:52:03.951 [2024-12-09 05:46:55.419774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:52:03.951 [2024-12-09 05:46:55.419784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.951 [2024-12-09 05:46:55.419825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:03.951 [2024-12-09 05:46:55.419839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 01:52:03.951 [2024-12-09 05:46:55.419851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:52:03.951 [2024-12-09 05:46:55.419862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:03.951 [2024-12-09 05:46:55.419935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.333 ms, result 0 01:52:03.951 true 01:52:03.951 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:52:04.209 { 01:52:04.209 "name": "ftl", 01:52:04.209 "properties": [ 01:52:04.209 { 01:52:04.209 "name": "superblock_version", 01:52:04.209 "value": 5, 01:52:04.209 "read-only": true 01:52:04.209 }, 
01:52:04.209 { 01:52:04.209 "name": "base_device", 01:52:04.209 "bands": [ 01:52:04.209 { 01:52:04.209 "id": 0, 01:52:04.209 "state": "CLOSED", 01:52:04.209 "validity": 1.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 1, 01:52:04.209 "state": "CLOSED", 01:52:04.209 "validity": 1.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 2, 01:52:04.209 "state": "CLOSED", 01:52:04.209 "validity": 0.007843137254901933 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 3, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 4, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 5, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 6, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 7, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 8, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 9, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 10, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 11, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 12, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 13, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 14, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 15, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 16, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 }, 01:52:04.209 { 01:52:04.209 "id": 17, 01:52:04.209 "state": "FREE", 01:52:04.209 "validity": 0.0 01:52:04.209 } 01:52:04.209 ], 01:52:04.209 "read-only": true 01:52:04.209 }, 01:52:04.210 { 01:52:04.210 "name": "cache_device", 01:52:04.210 "type": "bdev", 01:52:04.210 "chunks": [ 01:52:04.210 { 01:52:04.210 "id": 0, 01:52:04.210 "state": "INACTIVE", 01:52:04.210 "utilization": 0.0 01:52:04.210 }, 01:52:04.210 { 01:52:04.210 "id": 1, 01:52:04.210 "state": "OPEN", 01:52:04.210 "utilization": 0.0 01:52:04.210 }, 01:52:04.210 { 01:52:04.210 "id": 2, 01:52:04.210 "state": "OPEN", 01:52:04.210 "utilization": 0.0 01:52:04.210 }, 01:52:04.210 { 01:52:04.210 "id": 3, 01:52:04.210 "state": "FREE", 01:52:04.210 "utilization": 0.0 01:52:04.210 }, 01:52:04.210 { 01:52:04.210 "id": 4, 01:52:04.210 "state": "FREE", 01:52:04.210 "utilization": 0.0 01:52:04.210 } 01:52:04.210 ], 01:52:04.210 "read-only": true 01:52:04.210 }, 01:52:04.210 { 01:52:04.210 "name": "verbose_mode", 01:52:04.210 "value": true, 01:52:04.210 "unit": "", 01:52:04.210 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 01:52:04.210 }, 01:52:04.210 { 01:52:04.210 "name": "prep_upgrade_on_shutdown", 01:52:04.210 "value": false, 01:52:04.210 "unit": "", 01:52:04.210 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 01:52:04.210 } 01:52:04.210 ] 01:52:04.210 } 01:52:04.210 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 01:52:04.210 05:46:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 01:52:04.210 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:52:04.468 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 01:52:04.468 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 01:52:04.468 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 01:52:04.468 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 01:52:04.468 05:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 01:52:04.726 Validate MD5 checksum, iteration 1 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 01:52:04.726 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:52:04.727 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:52:04.727 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:52:04.727 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:52:04.727 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:52:04.727 05:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 01:52:04.727 [2024-12-09 05:46:56.227169] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
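Before the first checksum pass launched above, two jq filters gate the test: the target must report no NV-cache chunks with non-zero utilization and no bands in the OPENED state. A minimal sketch of that gating, with the jq filters copied from the xtrace and the surrounding shell (rpc path variable, failure handling) assumed:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as shown in the trace
  props=$("$rpc_py" bdev_ftl_get_properties -b ftl)
  # Count NV-cache chunks that still hold data (utilization != 0).
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # Count bands still open for writing. Note that in the JSON printed above the
  # bands sit under the property named "base_device", so this filter (taken
  # verbatim from the trace) selects nothing and returns 0 by construction.
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  # Assumed failure handling; the trace shows the two [[ 0 -ne 0 ]] checks separately.
  [[ $used -ne 0 || $opened -ne 0 ]] && exit 1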
01:52:04.727 [2024-12-09 05:46:56.227550] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84541 ] 01:52:04.985 [2024-12-09 05:46:56.405079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:52:04.985 [2024-12-09 05:46:56.548807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:52:06.887  [2024-12-09T05:46:59.441Z] Copying: 499/1024 [MB] (499 MBps) [2024-12-09T05:46:59.441Z] Copying: 965/1024 [MB] (466 MBps) [2024-12-09T05:47:00.814Z] Copying: 1024/1024 [MB] (average 481 MBps) 01:52:09.197 01:52:09.197 05:47:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 01:52:09.197 05:47:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=772568825810109396ec634543a9a25e 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 772568825810109396ec634543a9a25e != \7\7\2\5\6\8\8\2\5\8\1\0\1\0\9\3\9\6\e\c\6\3\4\5\4\3\a\9\a\2\5\e ]] 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 01:52:11.103 Validate MD5 checksum, iteration 2 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 01:52:11.103 05:47:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 01:52:11.103 [2024-12-09 05:47:02.482592] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:52:11.103 [2024-12-09 05:47:02.483002] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84606 ] 01:52:11.104 [2024-12-09 05:47:02.657380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:52:11.364 [2024-12-09 05:47:02.798354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:52:13.270  [2024-12-09T05:47:05.455Z] Copying: 503/1024 [MB] (503 MBps) [2024-12-09T05:47:05.455Z] Copying: 990/1024 [MB] (487 MBps) [2024-12-09T05:47:07.355Z] Copying: 1024/1024 [MB] (average 495 MBps) 01:52:15.738 01:52:15.738 05:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 01:52:15.738 05:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7c4dda1e996d7ba05b83f8fba330507d 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7c4dda1e996d7ba05b83f8fba330507d != \7\c\4\d\d\a\1\e\9\9\6\d\7\b\a\0\5\b\8\3\f\8\f\b\a\3\3\0\5\0\7\d ]] 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84467 ]] 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84467 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84673 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84673 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84673 ']' 01:52:17.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 01:52:17.639 05:47:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 01:52:17.639 [2024-12-09 05:47:08.982565] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:52:17.639 [2024-12-09 05:47:08.982751] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84673 ] 01:52:17.639 [2024-12-09 05:47:09.149126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:52:17.639 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84467 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 01:52:17.639 [2024-12-09 05:47:09.248586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:52:18.612 [2024-12-09 05:47:10.085151] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:52:18.612 [2024-12-09 05:47:10.085226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 01:52:18.872 [2024-12-09 05:47:10.230858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.230908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 01:52:18.872 [2024-12-09 05:47:10.230927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 01:52:18.872 [2024-12-09 05:47:10.230937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.231008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.231026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:52:18.872 [2024-12-09 05:47:10.231037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 01:52:18.872 [2024-12-09 05:47:10.231046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.231073] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 01:52:18.872 [2024-12-09 05:47:10.231850] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 01:52:18.872 [2024-12-09 05:47:10.231876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.231888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:52:18.872 [2024-12-09 05:47:10.231899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 01:52:18.872 [2024-12-09 05:47:10.231909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.232479] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 01:52:18.872 [2024-12-09 05:47:10.250609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.250650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 01:52:18.872 [2024-12-09 05:47:10.250696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.132 ms 01:52:18.872 [2024-12-09 05:47:10.250710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.260024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 01:52:18.872 [2024-12-09 05:47:10.260064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 01:52:18.872 [2024-12-09 05:47:10.260079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 01:52:18.872 [2024-12-09 05:47:10.260088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.260481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.260499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:52:18.872 [2024-12-09 05:47:10.260510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 01:52:18.872 [2024-12-09 05:47:10.260519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.260582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.260599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:52:18.872 [2024-12-09 05:47:10.260609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 01:52:18.872 [2024-12-09 05:47:10.260618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.260649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.260682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 01:52:18.872 [2024-12-09 05:47:10.260713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 01:52:18.872 [2024-12-09 05:47:10.260722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.260748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 01:52:18.872 [2024-12-09 05:47:10.264012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.264195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:52:18.872 [2024-12-09 05:47:10.264219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.270 ms 01:52:18.872 [2024-12-09 05:47:10.264239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.264284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.264300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 01:52:18.872 [2024-12-09 05:47:10.264311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:52:18.872 [2024-12-09 05:47:10.264321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.264370] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 01:52:18.872 [2024-12-09 05:47:10.264398] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 01:52:18.872 [2024-12-09 05:47:10.264434] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 01:52:18.872 [2024-12-09 05:47:10.264455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 01:52:18.872 [2024-12-09 05:47:10.264566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 01:52:18.872 [2024-12-09 05:47:10.264580] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 01:52:18.872 [2024-12-09 05:47:10.264593] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 01:52:18.872 [2024-12-09 05:47:10.264605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 01:52:18.872 [2024-12-09 05:47:10.264616] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 01:52:18.872 [2024-12-09 05:47:10.264627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 01:52:18.872 [2024-12-09 05:47:10.264636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 01:52:18.872 [2024-12-09 05:47:10.264645] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 01:52:18.872 [2024-12-09 05:47:10.264654] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 01:52:18.872 [2024-12-09 05:47:10.264669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.264695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 01:52:18.872 [2024-12-09 05:47:10.264705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 01:52:18.872 [2024-12-09 05:47:10.264787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.264869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.872 [2024-12-09 05:47:10.264882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 01:52:18.872 [2024-12-09 05:47:10.264892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 01:52:18.872 [2024-12-09 05:47:10.264902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.872 [2024-12-09 05:47:10.264997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 01:52:18.872 [2024-12-09 05:47:10.265017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 01:52:18.872 [2024-12-09 05:47:10.265028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:52:18.872 [2024-12-09 05:47:10.265037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.872 [2024-12-09 05:47:10.265048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 01:52:18.872 [2024-12-09 05:47:10.265057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 01:52:18.873 [2024-12-09 05:47:10.265090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 01:52:18.873 [2024-12-09 05:47:10.265100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 01:52:18.873 [2024-12-09 05:47:10.265108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 01:52:18.873 [2024-12-09 05:47:10.265140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 01:52:18.873 [2024-12-09 05:47:10.265148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 01:52:18.873 [2024-12-09 05:47:10.265166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
01:52:18.873 [2024-12-09 05:47:10.265175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 01:52:18.873 [2024-12-09 05:47:10.265192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 01:52:18.873 [2024-12-09 05:47:10.265200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 01:52:18.873 [2024-12-09 05:47:10.265218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 01:52:18.873 [2024-12-09 05:47:10.265237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 01:52:18.873 [2024-12-09 05:47:10.265255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 01:52:18.873 [2024-12-09 05:47:10.265264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 01:52:18.873 [2024-12-09 05:47:10.265280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 01:52:18.873 [2024-12-09 05:47:10.265289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 01:52:18.873 [2024-12-09 05:47:10.265306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 01:52:18.873 [2024-12-09 05:47:10.265325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 01:52:18.873 [2024-12-09 05:47:10.265346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 01:52:18.873 [2024-12-09 05:47:10.265355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 01:52:18.873 [2024-12-09 05:47:10.265372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 01:52:18.873 [2024-12-09 05:47:10.265398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 01:52:18.873 [2024-12-09 05:47:10.265424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 01:52:18.873 [2024-12-09 05:47:10.265432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265440] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 01:52:18.873 [2024-12-09 05:47:10.265451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 01:52:18.873 [2024-12-09 05:47:10.265460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 01:52:18.873 [2024-12-09 05:47:10.265480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 01:52:18.873 [2024-12-09 05:47:10.265490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 01:52:18.873 [2024-12-09 05:47:10.265498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 01:52:18.873 [2024-12-09 05:47:10.265507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 01:52:18.873 [2024-12-09 05:47:10.265516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 01:52:18.873 [2024-12-09 05:47:10.265525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 01:52:18.873 [2024-12-09 05:47:10.265535] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 01:52:18.873 [2024-12-09 05:47:10.265547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 01:52:18.873 [2024-12-09 05:47:10.265567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 01:52:18.873 [2024-12-09 05:47:10.265593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 01:52:18.873 [2024-12-09 05:47:10.265602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 01:52:18.873 [2024-12-09 05:47:10.265611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 01:52:18.873 [2024-12-09 05:47:10.265620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 01:52:18.873 [2024-12-09 05:47:10.265684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 01:52:18.873 [2024-12-09 05:47:10.265694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 01:52:18.873 [2024-12-09 05:47:10.265733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 01:52:18.873 [2024-12-09 05:47:10.265742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 01:52:18.873 [2024-12-09 05:47:10.265752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 01:52:18.873 [2024-12-09 05:47:10.265762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.265772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 01:52:18.873 [2024-12-09 05:47:10.265782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 01:52:18.873 [2024-12-09 05:47:10.265791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.873 [2024-12-09 05:47:10.296639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.296703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:52:18.873 [2024-12-09 05:47:10.296737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.792 ms 01:52:18.873 [2024-12-09 05:47:10.296747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.873 [2024-12-09 05:47:10.296814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.296828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 01:52:18.873 [2024-12-09 05:47:10.296840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 01:52:18.873 [2024-12-09 05:47:10.296850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.873 [2024-12-09 05:47:10.334131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.334382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:52:18.873 [2024-12-09 05:47:10.334409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.209 ms 01:52:18.873 [2024-12-09 05:47:10.334422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.873 [2024-12-09 05:47:10.334477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.334494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:52:18.873 [2024-12-09 05:47:10.334506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:52:18.873 [2024-12-09 05:47:10.334539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.873 [2024-12-09 05:47:10.334685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.334757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:52:18.873 [2024-12-09 05:47:10.334771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 01:52:18.873 [2024-12-09 05:47:10.334782] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 01:52:18.873 [2024-12-09 05:47:10.334838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.873 [2024-12-09 05:47:10.334853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:52:18.873 [2024-12-09 05:47:10.334864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 01:52:18.873 [2024-12-09 05:47:10.334882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.353370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.353409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:52:18.874 [2024-12-09 05:47:10.353424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.445 ms 01:52:18.874 [2024-12-09 05:47:10.353439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.353572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.353600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 01:52:18.874 [2024-12-09 05:47:10.353613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 01:52:18.874 [2024-12-09 05:47:10.353622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.379222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.379264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 01:52:18.874 [2024-12-09 05:47:10.379280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.578 ms 01:52:18.874 [2024-12-09 05:47:10.379290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.389060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.389105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 01:52:18.874 [2024-12-09 05:47:10.389120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 01:52:18.874 [2024-12-09 05:47:10.389130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.451815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.451892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 01:52:18.874 [2024-12-09 05:47:10.451911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.620 ms 01:52:18.874 [2024-12-09 05:47:10.451922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.452120] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 01:52:18.874 [2024-12-09 05:47:10.452253] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 01:52:18.874 [2024-12-09 05:47:10.452370] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 01:52:18.874 [2024-12-09 05:47:10.452484] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 01:52:18.874 [2024-12-09 05:47:10.452497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.452507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 01:52:18.874 [2024-12-09 
05:47:10.452517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 01:52:18.874 [2024-12-09 05:47:10.452527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.452627] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 01:52:18.874 [2024-12-09 05:47:10.452648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.452682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 01:52:18.874 [2024-12-09 05:47:10.452714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 01:52:18.874 [2024-12-09 05:47:10.452724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.468275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.468316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 01:52:18.874 [2024-12-09 05:47:10.468348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.520 ms 01:52:18.874 [2024-12-09 05:47:10.468359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.477606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.477642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 01:52:18.874 [2024-12-09 05:47:10.477656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 01:52:18.874 [2024-12-09 05:47:10.477713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:18.874 [2024-12-09 05:47:10.477858] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 01:52:18.874 [2024-12-09 05:47:10.478163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:18.874 [2024-12-09 05:47:10.478182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 01:52:18.874 [2024-12-09 05:47:10.478219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.307 ms 01:52:18.874 [2024-12-09 05:47:10.478242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:19.810 [2024-12-09 05:47:11.099880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:19.810 [2024-12-09 05:47:11.099979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 01:52:19.810 [2024-12-09 05:47:11.100016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 620.539 ms 01:52:19.810 [2024-12-09 05:47:11.100027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:19.810 [2024-12-09 05:47:11.104686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:19.810 [2024-12-09 05:47:11.104741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 01:52:19.810 [2024-12-09 05:47:11.104758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.081 ms 01:52:19.810 [2024-12-09 05:47:11.104777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:19.810 [2024-12-09 05:47:11.105285] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 01:52:19.810 [2024-12-09 05:47:11.105380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:19.810 [2024-12-09 05:47:11.105400] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 01:52:19.810 [2024-12-09 05:47:11.105428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 01:52:19.810 [2024-12-09 05:47:11.105455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:19.810 [2024-12-09 05:47:11.105595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:19.810 [2024-12-09 05:47:11.105614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 01:52:19.810 [2024-12-09 05:47:11.105627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:52:19.810 [2024-12-09 05:47:11.105644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:19.810 [2024-12-09 05:47:11.105707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 627.859 ms, result 0 01:52:19.810 [2024-12-09 05:47:11.105781] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 01:52:19.810 [2024-12-09 05:47:11.105862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:19.810 [2024-12-09 05:47:11.105876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 01:52:19.810 [2024-12-09 05:47:11.105887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 01:52:19.810 [2024-12-09 05:47:11.105898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.379 [2024-12-09 05:47:11.706445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.379 [2024-12-09 05:47:11.706560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 01:52:20.379 [2024-12-09 05:47:11.706597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 599.441 ms 01:52:20.379 [2024-12-09 05:47:11.706609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.379 [2024-12-09 05:47:11.711151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.379 [2024-12-09 05:47:11.711209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 01:52:20.379 [2024-12-09 05:47:11.711241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 01:52:20.379 [2024-12-09 05:47:11.711252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.379 [2024-12-09 05:47:11.711783] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 01:52:20.379 [2024-12-09 05:47:11.711815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.379 [2024-12-09 05:47:11.711827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 01:52:20.379 [2024-12-09 05:47:11.711840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.497 ms 01:52:20.379 [2024-12-09 05:47:11.711850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.379 [2024-12-09 05:47:11.711891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.379 [2024-12-09 05:47:11.711909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 01:52:20.379 [2024-12-09 05:47:11.711920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 01:52:20.379 [2024-12-09 05:47:11.711946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.379 [2024-12-09 
05:47:11.712009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 606.212 ms, result 0 01:52:20.380 [2024-12-09 05:47:11.712104] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 01:52:20.380 [2024-12-09 05:47:11.712119] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 01:52:20.380 [2024-12-09 05:47:11.712133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.712144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 01:52:20.380 [2024-12-09 05:47:11.712156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1234.314 ms 01:52:20.380 [2024-12-09 05:47:11.712166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.380 [2024-12-09 05:47:11.712206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.712226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 01:52:20.380 [2024-12-09 05:47:11.712238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 01:52:20.380 [2024-12-09 05:47:11.712248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.380 [2024-12-09 05:47:11.723219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 01:52:20.380 [2024-12-09 05:47:11.723507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.723532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 01:52:20.380 [2024-12-09 05:47:11.723547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.237 ms 01:52:20.380 [2024-12-09 05:47:11.723558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.380 [2024-12-09 05:47:11.724373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.724410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 01:52:20.380 [2024-12-09 05:47:11.724425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.726 ms 01:52:20.380 [2024-12-09 05:47:11.724436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.380 [2024-12-09 05:47:11.726655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.726870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 01:52:20.380 [2024-12-09 05:47:11.726896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.195 ms 01:52:20.380 [2024-12-09 05:47:11.726908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.380 [2024-12-09 05:47:11.726962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.726979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 01:52:20.380 [2024-12-09 05:47:11.726998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 01:52:20.380 [2024-12-09 05:47:11.727009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:20.380 [2024-12-09 05:47:11.727142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:20.380 [2024-12-09 05:47:11.727159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 01:52:20.380 
[2024-12-09 05:47:11.727171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms
01:52:20.380 [2024-12-09 05:47:11.727195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:52:20.380 [2024-12-09 05:47:11.727221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:52:20.380 [2024-12-09 05:47:11.727233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
01:52:20.380 [2024-12-09 05:47:11.727244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
01:52:20.380 [2024-12-09 05:47:11.727253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:52:20.380 [2024-12-09 05:47:11.727295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
01:52:20.380 [2024-12-09 05:47:11.727312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:52:20.380 [2024-12-09 05:47:11.727322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup
01:52:20.380 [2024-12-09 05:47:11.727332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms
01:52:20.380 [2024-12-09 05:47:11.727342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:52:20.380 [2024-12-09 05:47:11.727430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
01:52:20.380 [2024-12-09 05:47:11.727444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
01:52:20.380 [2024-12-09 05:47:11.727453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms
01:52:20.380 [2024-12-09 05:47:11.727462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
01:52:20.380 [2024-12-09 05:47:11.728953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1497.515 ms, result 0
01:52:20.380 [2024-12-09 05:47:11.743914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
01:52:20.380 [2024-12-09 05:47:11.759923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
01:52:20.380 [2024-12-09 05:47:11.768784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
01:52:20.380 Validate MD5 checksum, iteration 1
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
01:52:20.380 05:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
01:52:20.380 [2024-12-09 05:47:11.878331] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:52:20.380 [2024-12-09 05:47:11.878775] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84708 ]
01:52:20.639 [2024-12-09 05:47:12.051905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:52:20.639 [2024-12-09 05:47:12.191740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:52:22.541  [2024-12-09T05:47:15.092Z] Copying: 505/1024 [MB] (505 MBps)
[2024-12-09T05:47:15.092Z] Copying: 992/1024 [MB] (487 MBps)
[2024-12-09T05:47:16.474Z] Copying: 1024/1024 [MB] (average 491 MBps)
01:52:24.857
01:52:24.857 05:47:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
01:52:24.857 05:47:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=772568825810109396ec634543a9a25e
01:52:26.754 Validate MD5 checksum, iteration 2
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 772568825810109396ec634543a9a25e != \7\7\2\5\6\8\8\2\5\8\1\0\1\0\9\3\9\6\e\c\6\3\4\5\4\3\a\9\a\2\5\e ]]
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
01:52:26.754 05:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
01:52:26.754
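The xtrace above is test_validate_checksum from test/ftl/upgrade_shutdown.sh driving spdk_dd reads from the FTL bdev over NVMe/TCP. A minimal sketch of the loop those traced lines imply; only the expanded commands appear in the log, so the function wrapper, the iterations/testfile/checksums names, and the failure handling are assumptions:

    # Sketch reconstructed from the xtrace; not verbatim upgrade_shutdown.sh.
    test_validate_checksum() {
        local skip=0 sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Pull 1024 x 1 MiB blocks from the ftln1 bdev into a scratch file,
            # offset by $skip blocks (0, then 1024, ...), as traced at @99.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))                      # @100 in the trace
            sum=$(md5sum "$testfile" | cut -f1 -d' ')  # @102-@103
            # @105 compares against the checksum recorded before shutdown;
            # a mismatch would fail the test here (exact handling is assumed).
            [[ $sum == "${checksums[i]}" ]] || return 1
        done
    }

The DPDK initialization and Copying lines that follow are the second iteration's spdk_dd run; each iteration moves 1024 MiB, so the ~491 and ~503 MBps averages put each pass at roughly two seconds.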
[2024-12-09 05:47:18.142700] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:52:26.754 [2024-12-09 05:47:18.143063] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84774 ] 01:52:26.754 [2024-12-09 05:47:18.316551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:52:27.012 [2024-12-09 05:47:18.419056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:52:28.384  [2024-12-09T05:47:21.390Z] Copying: 513/1024 [MB] (513 MBps) [2024-12-09T05:47:21.390Z] Copying: 1007/1024 [MB] (494 MBps) [2024-12-09T05:47:22.325Z] Copying: 1024/1024 [MB] (average 503 MBps) 01:52:30.708 01:52:30.708 05:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 01:52:30.708 05:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7c4dda1e996d7ba05b83f8fba330507d 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7c4dda1e996d7ba05b83f8fba330507d != \7\c\4\d\d\a\1\e\9\9\6\d\7\b\a\0\5\b\8\3\f\8\f\b\a\3\3\0\5\0\7\d ]] 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 01:52:32.612 05:47:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84673 ]] 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84673 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84673 ']' 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84673 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84673 01:52:32.612 killing process with pid 84673 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84673' 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84673 01:52:32.612 05:47:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84673 01:52:33.577 [2024-12-09 05:47:24.932802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 01:52:33.577 [2024-12-09 05:47:24.949091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.949133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 01:52:33.577 [2024-12-09 05:47:24.949152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 01:52:33.577 [2024-12-09 05:47:24.949162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.949190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 01:52:33.577 [2024-12-09 05:47:24.952424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.952595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 01:52:33.577 [2024-12-09 05:47:24.952626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.216 ms 01:52:33.577 [2024-12-09 05:47:24.952637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.952936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.952955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 01:52:33.577 [2024-12-09 05:47:24.952967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 01:52:33.577 [2024-12-09 05:47:24.952978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.954218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.954271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 01:52:33.577 [2024-12-09 05:47:24.954304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.219 ms 01:52:33.577 [2024-12-09 05:47:24.954336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.955559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.955727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 01:52:33.577 [2024-12-09 05:47:24.955769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.183 ms 01:52:33.577 [2024-12-09 05:47:24.955780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.966062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.966253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 01:52:33.577 [2024-12-09 05:47:24.966369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.219 ms 01:52:33.577 [2024-12-09 05:47:24.966424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.972178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.972349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 01:52:33.577 [2024-12-09 05:47:24.972460] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.514 ms 01:52:33.577 [2024-12-09 05:47:24.972507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.972731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.972895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 01:52:33.577 [2024-12-09 05:47:24.972997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 01:52:33.577 [2024-12-09 05:47:24.973100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.983451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.983613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 01:52:33.577 [2024-12-09 05:47:24.983790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.290 ms 01:52:33.577 [2024-12-09 05:47:24.983846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:24.994082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:24.994245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 01:52:33.577 [2024-12-09 05:47:24.994399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.078 ms 01:52:33.577 [2024-12-09 05:47:24.994445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:25.004371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:25.004535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 01:52:33.577 [2024-12-09 05:47:25.004637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.856 ms 01:52:33.577 [2024-12-09 05:47:25.004722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:25.014730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:25.014891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 01:52:33.577 [2024-12-09 05:47:25.014987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.903 ms 01:52:33.577 [2024-12-09 05:47:25.015031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:25.015098] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 01:52:33.577 [2024-12-09 05:47:25.015212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 01:52:33.577 [2024-12-09 05:47:25.015276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 01:52:33.577 [2024-12-09 05:47:25.015379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 01:52:33.577 [2024-12-09 05:47:25.015519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.015630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.015836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.015894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 
05:47:25.016023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 01:52:33.577 [2024-12-09 05:47:25.016408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 01:52:33.577 [2024-12-09 05:47:25.016419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e0174778-4451-42ff-a3d0-0e505c20040a 01:52:33.577 [2024-12-09 05:47:25.016430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 01:52:33.577 [2024-12-09 05:47:25.016440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 01:52:33.577 [2024-12-09 05:47:25.016449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 01:52:33.577 [2024-12-09 05:47:25.016460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 01:52:33.577 [2024-12-09 05:47:25.016469] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 01:52:33.577 [2024-12-09 05:47:25.016479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 01:52:33.577 [2024-12-09 05:47:25.016490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 01:52:33.577 [2024-12-09 05:47:25.016499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 01:52:33.577 [2024-12-09 05:47:25.016507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 01:52:33.577 [2024-12-09 05:47:25.016517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:25.016534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 01:52:33.577 [2024-12-09 05:47:25.016546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.421 ms 01:52:33.577 [2024-12-09 05:47:25.016556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.577 [2024-12-09 05:47:25.033187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.577 [2024-12-09 05:47:25.033219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 01:52:33.578 [2024-12-09 05:47:25.033232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 16.605 ms 01:52:33.578 [2024-12-09 05:47:25.033242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.578 [2024-12-09 05:47:25.033650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 01:52:33.578 [2024-12-09 05:47:25.033694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 01:52:33.578 [2024-12-09 05:47:25.033706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 01:52:33.578 [2024-12-09 05:47:25.033715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.578 [2024-12-09 05:47:25.080513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.578 [2024-12-09 05:47:25.080555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 01:52:33.578 [2024-12-09 05:47:25.080570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.578 [2024-12-09 05:47:25.080579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.578 [2024-12-09 05:47:25.082185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.578 [2024-12-09 05:47:25.082240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 01:52:33.578 [2024-12-09 05:47:25.082254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.578 [2024-12-09 05:47:25.082263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.578 [2024-12-09 05:47:25.082351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.578 [2024-12-09 05:47:25.082369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 01:52:33.578 [2024-12-09 05:47:25.082379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.578 [2024-12-09 05:47:25.082389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.578 [2024-12-09 05:47:25.082418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.578 [2024-12-09 05:47:25.082431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 01:52:33.578 [2024-12-09 05:47:25.082441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.578 [2024-12-09 05:47:25.082451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.578 [2024-12-09 05:47:25.165953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.578 [2024-12-09 05:47:25.166024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 01:52:33.578 [2024-12-09 05:47:25.166040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.578 [2024-12-09 05:47:25.166050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.234316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.234363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 01:52:33.836 [2024-12-09 05:47:25.234396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.234406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.234540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.234558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 01:52:33.836 [2024-12-09 05:47:25.234569] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.234594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.234665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.234694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 01:52:33.836 [2024-12-09 05:47:25.234706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.234715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.234876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.234894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 01:52:33.836 [2024-12-09 05:47:25.234905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.234915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.234967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.234998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 01:52:33.836 [2024-12-09 05:47:25.235032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.235058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.235105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.235134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 01:52:33.836 [2024-12-09 05:47:25.235145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.235155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.235208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 01:52:33.836 [2024-12-09 05:47:25.235228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 01:52:33.836 [2024-12-09 05:47:25.235240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 01:52:33.836 [2024-12-09 05:47:25.235250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 01:52:33.836 [2024-12-09 05:47:25.235399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 286.264 ms, result 0 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 01:52:34.771 Remove shared memory files 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84467
01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
01:52:34.771 ************************************
01:52:34.771 END TEST ftl_upgrade_shutdown
01:52:34.771 ************************************
01:52:34.771
01:52:34.771 real 1m28.646s
01:52:34.771 user 2m4.746s
01:52:34.771 sys 0m23.054s
01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
01:52:34.771 05:47:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@14 -- # killprocess 76808
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@954 -- # '[' -z 76808 ']'
01:52:34.771 Process with pid 76808 is not found
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@958 -- # kill -0 76808
01:52:34.771 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76808) - No such process
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76808 is not found'
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84895
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
01:52:34.771 05:47:26 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84895
01:52:34.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@835 -- # '[' -z 84895 ']'
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
01:52:34.771 05:47:26 ftl -- common/autotest_common.sh@10 -- # set +x
01:52:35.029 [2024-12-09 05:47:26.464169] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
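The paired 'rm -f rm -f ...' entries above come from the remove_shm helper in test/ftl/common.sh. A rough sketch of what it evidently removes; only /dev/shm/spdk_tgt_trace.pid84467 and /dev/shm/iscsi are visible in the trace, and the pid glob generalizing the first path is an assumption:

    # Hedged sketch of remove_shm; file lists inferred from the expanded
    # rm -f commands in the trace, the glob below is an assumption.
    remove_shm() {
        echo Remove shared memory files
        rm -f /dev/shm/spdk_tgt_trace.pid*   # per-target trace shm (pid 84467 here)
        rm -f /dev/shm/iscsi
    }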
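killprocess from test/common/autotest_common.sh is traced on both of its paths in this log: the live-process path for pid 84673 earlier (uname, ps, kill, wait) and the already-exited path for pid 76808 just above. A sketch assembled from those traced branches; the exact control flow in autotest_common.sh is an assumption:

    # Sketch of killprocess as implied by the traced branches in this log.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                   # '[' -z "$pid" ']' guard (@954)
        if ! kill -0 "$pid" 2> /dev/null; then      # @958; pid 76808 took this path
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [[ $(uname) == Linux ]]; then            # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here (@960)
        fi
        if [[ $process_name != sudo ]]; then        # @964: never signal a sudo wrapper
            echo "killing process with pid $pid"    # @972
            kill "$pid"                             # @973 (default SIGTERM)
            wait "$pid"                             # @978: reap the child target
        fi
    }

The same live path repeats for pid 84895 in the teardown below.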
01:52:35.029 [2024-12-09 05:47:26.464654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84895 ]
01:52:35.029 [2024-12-09 05:47:26.632343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:52:35.287 [2024-12-09 05:47:26.728759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:52:35.852 05:47:27 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:52:35.852 05:47:27 ftl -- common/autotest_common.sh@868 -- # return 0
01:52:35.852 05:47:27 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
01:52:36.416 nvme0n1
01:52:36.416 05:47:27 ftl -- ftl/ftl.sh@22 -- # clear_lvols
01:52:36.416 05:47:27 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
01:52:36.416 05:47:27 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
01:52:36.674 05:47:28 ftl -- ftl/common.sh@28 -- # stores=2a9a0461-2746-4b9a-87cf-4553cbb3216d
01:52:36.674 05:47:28 ftl -- ftl/common.sh@29 -- # for lvs in $stores
01:52:36.674 05:47:28 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a9a0461-2746-4b9a-87cf-4553cbb3216d
01:52:36.674 05:47:28 ftl -- ftl/ftl.sh@23 -- # killprocess 84895
01:52:36.674 05:47:28 ftl -- common/autotest_common.sh@954 -- # '[' -z 84895 ']'
01:52:36.674 05:47:28 ftl -- common/autotest_common.sh@958 -- # kill -0 84895
01:52:36.674 05:47:28 ftl -- common/autotest_common.sh@959 -- # uname
01:52:36.674 05:47:28 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:52:36.674 05:47:28 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84895
01:52:36.932 killing process with pid 84895
01:52:36.932 05:47:28 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:52:36.932 05:47:28 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:52:36.932 05:47:28 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84895'
01:52:36.932 05:47:28 ftl -- common/autotest_common.sh@973 -- # kill 84895
01:52:36.932 05:47:28 ftl -- common/autotest_common.sh@978 -- # wait 84895
01:52:38.829 05:47:30 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
01:52:38.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:52:38.829 Waiting for block devices as requested
01:52:39.087 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
01:52:39.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
01:52:39.088 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
01:52:39.347 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
01:52:44.619 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
01:52:44.619 05:47:35 ftl -- ftl/ftl.sh@28 -- # remove_shm
01:52:44.619 Remove shared memory files
01:52:44.619 05:47:35 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
01:52:44.619 05:47:35 ftl -- ftl/common.sh@205 -- # rm -f rm -f
01:52:44.619 05:47:35 ftl -- ftl/common.sh@206 -- # rm -f rm -f
01:52:44.619 05:47:35 ftl -- ftl/common.sh@207 -- # rm -f rm -f
01:52:44.619 05:47:35 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
01:52:44.619 05:47:35 ftl -- ftl/common.sh@209 -- # rm -f rm -f
01:52:44.619
01:52:44.619 real 12m21.406s
01:52:44.619 user 15m18.414s
01:52:44.619 sys 1m30.501s
01:52:44.619 05:47:35 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
01:52:44.619 ************************************
01:52:44.619 END TEST ftl
01:52:44.619 ************************************
01:52:44.619 05:47:35 ftl -- common/autotest_common.sh@10 -- # set +x
01:52:44.619 05:47:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
01:52:44.619 05:47:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
01:52:44.619 05:47:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
01:52:44.619 05:47:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
01:52:44.619 05:47:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
01:52:44.619 05:47:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
01:52:44.619 05:47:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
01:52:44.619 05:47:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
01:52:44.619 05:47:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
01:52:44.619 05:47:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
01:52:44.619 05:47:35 -- common/autotest_common.sh@726 -- # xtrace_disable
01:52:44.619 05:47:35 -- common/autotest_common.sh@10 -- # set +x
01:52:44.619 05:47:35 -- spdk/autotest.sh@388 -- # autotest_cleanup
01:52:44.619 05:47:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0
01:52:44.619 05:47:35 -- common/autotest_common.sh@1397 -- # xtrace_disable
01:52:44.619 05:47:35 -- common/autotest_common.sh@10 -- # set +x
01:52:45.996 INFO: APP EXITING
01:52:45.996 INFO: killing all VMs
01:52:45.996 INFO: killing vhost app
01:52:45.996 INFO: EXIT DONE
01:52:46.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:52:46.510 0000:00:11.0 (1b36 0010): Already using the nvme driver
01:52:46.510 0000:00:10.0 (1b36 0010): Already using the nvme driver
01:52:46.510 0000:00:12.0 (1b36 0010): Already using the nvme driver
01:52:46.767 0000:00:13.0 (1b36 0010): Already using the nvme driver
01:52:47.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:52:47.281 Cleaning
01:52:47.281 Removing: /var/run/dpdk/spdk0/config
01:52:47.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
01:52:47.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
01:52:47.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
01:52:47.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
01:52:47.281 Removing: /var/run/dpdk/spdk0/fbarray_memzone
01:52:47.281 Removing: /var/run/dpdk/spdk0/hugepage_info
01:52:47.281 Removing: /var/run/dpdk/spdk0
01:52:47.281 Removing: /var/run/dpdk/spdk_pid57693
01:52:47.281 Removing: /var/run/dpdk/spdk_pid57933
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58167
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58277
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58333
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58471
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58490
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58700
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58812
01:52:47.281 Removing: /var/run/dpdk/spdk_pid58924
01:52:47.281 Removing: /var/run/dpdk/spdk_pid59046
01:52:47.281 Removing: /var/run/dpdk/spdk_pid59149
01:52:47.539 Removing: /var/run/dpdk/spdk_pid59194
01:52:47.539 Removing: /var/run/dpdk/spdk_pid59231
01:52:47.539 Removing: /var/run/dpdk/spdk_pid59306
01:52:47.539 Removing: /var/run/dpdk/spdk_pid59419
01:52:47.539 Removing: /var/run/dpdk/spdk_pid59890
01:52:47.539 Removing: /var/run/dpdk/spdk_pid59967
01:52:47.539
Removing: /var/run/dpdk/spdk_pid60036
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60058
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60208
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60230
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60377
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60394
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60458
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60476
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60546
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60569
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60764
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60801
01:52:47.539 Removing: /var/run/dpdk/spdk_pid60890
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61073
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61168
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61210
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61696
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61794
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61914
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61967
01:52:47.539 Removing: /var/run/dpdk/spdk_pid61993
01:52:47.539 Removing: /var/run/dpdk/spdk_pid62078
01:52:47.539 Removing: /var/run/dpdk/spdk_pid62715
01:52:47.539 Removing: /var/run/dpdk/spdk_pid62757
01:52:47.539 Removing: /var/run/dpdk/spdk_pid63290
01:52:47.539 Removing: /var/run/dpdk/spdk_pid63388
01:52:47.539 Removing: /var/run/dpdk/spdk_pid63508
01:52:47.539 Removing: /var/run/dpdk/spdk_pid63567
01:52:47.539 Removing: /var/run/dpdk/spdk_pid63598
01:52:47.539 Removing: /var/run/dpdk/spdk_pid63618
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65531
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65679
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65683
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65706
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65745
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65749
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65772
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65811
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65815
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65838
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65877
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65881
01:52:47.539 Removing: /var/run/dpdk/spdk_pid65904
01:52:47.539 Removing: /var/run/dpdk/spdk_pid67309
01:52:47.539 Removing: /var/run/dpdk/spdk_pid67423
01:52:47.539 Removing: /var/run/dpdk/spdk_pid68841
01:52:47.539 Removing: /var/run/dpdk/spdk_pid70568
01:52:47.539 Removing: /var/run/dpdk/spdk_pid70653
01:52:47.539 Removing: /var/run/dpdk/spdk_pid70731
01:52:47.539 Removing: /var/run/dpdk/spdk_pid70841
01:52:47.539 Removing: /var/run/dpdk/spdk_pid70933
01:52:47.539 Removing: /var/run/dpdk/spdk_pid71040
01:52:47.539 Removing: /var/run/dpdk/spdk_pid71114
01:52:47.539 Removing: /var/run/dpdk/spdk_pid71195
01:52:47.539 Removing: /var/run/dpdk/spdk_pid71305
01:52:47.539 Removing: /var/run/dpdk/spdk_pid71397
01:52:47.540 Removing: /var/run/dpdk/spdk_pid71498
01:52:47.540 Removing: /var/run/dpdk/spdk_pid71578
01:52:47.540 Removing: /var/run/dpdk/spdk_pid71659
01:52:47.540 Removing: /var/run/dpdk/spdk_pid71763
01:52:47.540 Removing: /var/run/dpdk/spdk_pid71865
01:52:47.540 Removing: /var/run/dpdk/spdk_pid71963
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72043
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72118
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72228
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72321
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72421
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72502
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72583
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72657
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72739
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72849
01:52:47.540 Removing: /var/run/dpdk/spdk_pid72946
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73046
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73126
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73195
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73275
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73349
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73457
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73555
01:52:47.540 Removing: /var/run/dpdk/spdk_pid73703
01:52:47.540 Removing: /var/run/dpdk/spdk_pid74000
01:52:47.540 Removing: /var/run/dpdk/spdk_pid74042
01:52:47.798 Removing: /var/run/dpdk/spdk_pid74524
01:52:47.798 Removing: /var/run/dpdk/spdk_pid74710
01:52:47.798 Removing: /var/run/dpdk/spdk_pid74810
01:52:47.798 Removing: /var/run/dpdk/spdk_pid74920
01:52:47.798 Removing: /var/run/dpdk/spdk_pid74979
01:52:47.798 Removing: /var/run/dpdk/spdk_pid75005
01:52:47.798 Removing: /var/run/dpdk/spdk_pid75300
01:52:47.798 Removing: /var/run/dpdk/spdk_pid75362
01:52:47.798 Removing: /var/run/dpdk/spdk_pid75453
01:52:47.798 Removing: /var/run/dpdk/spdk_pid75876
01:52:47.798 Removing: /var/run/dpdk/spdk_pid76023
01:52:47.798 Removing: /var/run/dpdk/spdk_pid76808
01:52:47.798 Removing: /var/run/dpdk/spdk_pid76957
01:52:47.798 Removing: /var/run/dpdk/spdk_pid77162
01:52:47.798 Removing: /var/run/dpdk/spdk_pid77270
01:52:47.798 Removing: /var/run/dpdk/spdk_pid77634
01:52:47.798 Removing: /var/run/dpdk/spdk_pid77921
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78274
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78483
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78636
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78707
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78862
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78898
01:52:47.798 Removing: /var/run/dpdk/spdk_pid78959
01:52:47.798 Removing: /var/run/dpdk/spdk_pid79182
01:52:47.798 Removing: /var/run/dpdk/spdk_pid79424
01:52:47.798 Removing: /var/run/dpdk/spdk_pid79884
01:52:47.798 Removing: /var/run/dpdk/spdk_pid80378
01:52:47.798 Removing: /var/run/dpdk/spdk_pid80837
01:52:47.798 Removing: /var/run/dpdk/spdk_pid81411
01:52:47.798 Removing: /var/run/dpdk/spdk_pid81560
01:52:47.798 Removing: /var/run/dpdk/spdk_pid81648
01:52:47.798 Removing: /var/run/dpdk/spdk_pid82353
01:52:47.798 Removing: /var/run/dpdk/spdk_pid82422
01:52:47.798 Removing: /var/run/dpdk/spdk_pid82894
01:52:47.798 Removing: /var/run/dpdk/spdk_pid83316
01:52:47.798 Removing: /var/run/dpdk/spdk_pid83871
01:52:47.798 Removing: /var/run/dpdk/spdk_pid84006
01:52:47.798 Removing: /var/run/dpdk/spdk_pid84058
01:52:47.798 Removing: /var/run/dpdk/spdk_pid84118
01:52:47.798 Removing: /var/run/dpdk/spdk_pid84188
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84262
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84467
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84541
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84606
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84673
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84708
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84774
01:52:47.799 Removing: /var/run/dpdk/spdk_pid84895
01:52:47.799 Clean
01:52:47.799 05:47:39 -- common/autotest_common.sh@1453 -- # return 0
01:52:47.799 05:47:39 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:52:47.799 05:47:39 -- common/autotest_common.sh@732 -- # xtrace_disable
01:52:47.799 05:47:39 -- common/autotest_common.sh@10 -- # set +x
01:52:47.799 05:47:39 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:52:47.799 05:47:39 -- common/autotest_common.sh@732 -- # xtrace_disable
01:52:47.799 05:47:39 -- common/autotest_common.sh@10 -- # set +x
01:52:48.057 05:47:39 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:52:48.057 05:47:39 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:52:48.057 05:47:39 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:52:48.057 05:47:39 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:52:48.057 05:47:39 -- spdk/autotest.sh@398 -- # hostname
01:52:48.057 05:47:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:52:48.057 geninfo: WARNING: invalid characters removed from testname!
01:53:09.993 05:48:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:53:13.277 05:48:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:53:15.808 05:48:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:53:18.340 05:48:09 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:53:20.872 05:48:12 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:53:23.402 05:48:14 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:53:25.968 05:48:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:53:25.968 05:48:17 -- spdk/autorun.sh@1 -- $ timing_finish
01:53:25.968 05:48:17 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:53:25.968 05:48:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:53:25.969 05:48:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:53:25.969 05:48:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:53:25.969 + [[ -n 5297 ]]
01:53:25.969 + sudo kill 5297
01:53:25.978 [Pipeline] }
01:53:25.994 [Pipeline] // timeout
01:53:25.999 [Pipeline] }
01:53:26.014 [Pipeline] // stage
01:53:26.019 [Pipeline] }
01:53:26.033 [Pipeline] // catchError
01:53:26.043 [Pipeline] stage
01:53:26.045 [Pipeline] { (Stop VM)
01:53:26.058 [Pipeline] sh
01:53:26.338 + vagrant halt
01:53:29.625 ==> default: Halting domain...
01:53:36.190 [Pipeline] sh
01:53:36.464 + vagrant destroy -f
01:53:39.763 ==> default: Removing domain...
01:53:39.771 [Pipeline] sh
01:53:40.047 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
01:53:40.055 [Pipeline] }
01:53:40.066 [Pipeline] // stage
01:53:40.070 [Pipeline] }
01:53:40.080 [Pipeline] // dir
01:53:40.084 [Pipeline] }
01:53:40.094 [Pipeline] // wrap
01:53:40.099 [Pipeline] }
01:53:40.107 [Pipeline] // catchError
01:53:40.114 [Pipeline] stage
01:53:40.116 [Pipeline] { (Epilogue)
01:53:40.125 [Pipeline] sh
01:53:40.401 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:53:46.979 [Pipeline] catchError
01:53:46.981 [Pipeline] {
01:53:46.995 [Pipeline] sh
01:53:47.277 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:53:47.536 Artifacts sizes are good
01:53:47.545 [Pipeline] }
01:53:47.563 [Pipeline] // catchError
01:53:47.578 [Pipeline] archiveArtifacts
01:53:47.588 Archiving artifacts
01:53:47.719 [Pipeline] cleanWs
01:53:47.770 [WS-CLEANUP] Deleting project workspace...
01:53:47.770 [WS-CLEANUP] Deferred wipeout is used...
01:53:47.777 [WS-CLEANUP] done
01:53:47.778 [Pipeline] }
01:53:47.793 [Pipeline] // stage
01:53:47.798 [Pipeline] }
01:53:47.809 [Pipeline] // node
01:53:47.814 [Pipeline] End of Pipeline
01:53:47.879 Finished: SUCCESS